Test Report: Docker_Linux_containerd_arm64 22122

                    
022dd2780ab8206ac68153a1ee37fdbcc6da7ccd:2025-12-13:42761

Failed tests (34/417)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 501.64
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 367.91
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.34
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.2
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.21
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 735.79
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.23
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.05
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.73
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.08
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.39
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.7
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 3
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.08
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.31
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.3
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.33
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.34
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.34
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.16
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 97.73
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.22
358 TestKubernetesUpgrade 798.26
404 TestStartStop/group/no-preload/serial/FirstStart 512.77
437 TestStartStop/group/newest-cni/serial/FirstStart 501.56
438 TestStartStop/group/no-preload/serial/DeployApp 3.16
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 114.04
442 TestStartStop/group/no-preload/serial/SecondStart 369.88
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 100.16
445 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.31
448 TestStartStop/group/newest-cni/serial/SecondStart 373.49
452 TestStartStop/group/newest-cni/serial/Pause 9.68
459 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 258.5
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 14:41:26.423517 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:43:42.554986 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:44:10.266695 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.175069 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.181592 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.193154 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.214555 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.256095 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.337617 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.499208 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.821016 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:19.463093 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:20.744701 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:23.306458 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:28.428809 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:38.670099 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:59.151623 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:46:40.114096 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:48:02.038469 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:48:42.554842 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.203306967s)

                                                
                                                
-- stdout --
	* [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - HTTP_PROXY=localhost:39059
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:39059 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-562018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-562018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226262s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001274207s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001274207s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
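A plausible follow-up, taken directly from the suggestion line in the stderr above, is to retry the same start with the kubelet cgroup driver forced to systemd. This is only a sketch of that retry (same binary, profile, and flags as the failing run, with the --extra-config flag from the log's suggestion added); it is not verified here that it clears the kubelet health-check timeout:

	out/minikube-linux-arm64 start -p functional-562018 --memory=4096 --apiserver-port=8441 \
	  --wait=all --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

If that still fails, `journalctl -xeu kubelet` inside the node (for example via `out/minikube-linux-arm64 ssh -p functional-562018`) is the other avenue the error text points at.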
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-562018
helpers_test.go:244: (dbg) docker inspect functional-562018:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	        "Created": "2025-12-13T14:41:15.451086653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1291703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T14:41:15.527927053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hosts",
	        "LogPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648-json.log",
	        "Name": "/functional-562018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-562018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-562018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	                "LowerDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-562018",
	                "Source": "/var/lib/docker/volumes/functional-562018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-562018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-562018",
	                "name.minikube.sigs.k8s.io": "functional-562018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b22297a29553cdd0dbc4eaa766abcdb1e67465ee18f1e2f5f9917dc8cf6d08",
	            "SandboxKey": "/var/run/docker/netns/f4b22297a295",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-562018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f3:95:ff:30:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd2e7aed753870546b4bccdeed8073ad5795c14334e906d06b964f72dc448c38",
	                    "EndpointID": "a0e947c1e40f773105c811b67b7d1d63f19d3a20060380bbde944bf9bfe39be5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-562018",
	                        "2cd1277ca783"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018: exit status 6 (309.892653ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 14:49:31.005267 1296774 status.go:458] kubeconfig endpoint: get endpoint: "functional-562018" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
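The status output above also warns that kubectl is pointing at a stale context and suggests `minikube update-context`. Scoped to this test's binary and profile, that would look roughly like the following (the -p flag is an assumption here; the log only names the bare command):

	out/minikube-linux-arm64 update-context -p functional-562018

Since the stderr shows that "functional-562018" never made it into the kubeconfig, updating the context may not succeed until a start completes.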
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-831661 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
	│ ssh            │ functional-831661 ssh sudo cat /etc/ssl/certs/12529342.pem                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
	│ ssh            │ functional-831661 ssh sudo cat /usr/share/ca-certificates/12529342.pem                                                                                          │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
	│ ssh            │ functional-831661 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image load --daemon kicbase/echo-server:functional-831661 --alsologtostderr                                                                   │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image save kicbase/echo-server:functional-831661 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image rm kicbase/echo-server:functional-831661 --alsologtostderr                                                                              │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ update-context │ functional-831661 update-context --alsologtostderr -v=2                                                                                                         │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ update-context │ functional-831661 update-context --alsologtostderr -v=2                                                                                                         │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ update-context │ functional-831661 update-context --alsologtostderr -v=2                                                                                                         │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image save --daemon kicbase/echo-server:functional-831661 --alsologtostderr                                                                   │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls --format json --alsologtostderr                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls --format short --alsologtostderr                                                                                                     │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls --format table --alsologtostderr                                                                                                     │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ ssh            │ functional-831661 ssh pgrep buildkitd                                                                                                                           │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	│ image          │ functional-831661 image ls --format yaml --alsologtostderr                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image build -t localhost/my-image:functional-831661 testdata/build --alsologtostderr                                                          │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ delete         │ -p functional-831661                                                                                                                                            │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ start          │ -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:41:10
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:41:10.536352 1291317 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:41:10.536463 1291317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:41:10.536467 1291317 out.go:374] Setting ErrFile to fd 2...
	I1213 14:41:10.536471 1291317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:41:10.536759 1291317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:41:10.537159 1291317 out.go:368] Setting JSON to false
	I1213 14:41:10.537974 1291317 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23019,"bootTime":1765613851,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:41:10.538032 1291317 start.go:143] virtualization:  
	I1213 14:41:10.542681 1291317 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:41:10.547211 1291317 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:41:10.547327 1291317 notify.go:221] Checking for updates...
	I1213 14:41:10.554065 1291317 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:41:10.557348 1291317 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:41:10.560675 1291317 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:41:10.563856 1291317 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 14:41:10.567020 1291317 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:41:10.570333 1291317 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:41:10.593803 1291317 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:41:10.593920 1291317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:41:10.659445 1291317 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 14:41:10.65054644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:41:10.659539 1291317 docker.go:319] overlay module found
	I1213 14:41:10.663023 1291317 out.go:179] * Using the docker driver based on user configuration
	I1213 14:41:10.665952 1291317 start.go:309] selected driver: docker
	I1213 14:41:10.665960 1291317 start.go:927] validating driver "docker" against <nil>
	I1213 14:41:10.665972 1291317 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:41:10.666736 1291317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:41:10.720114 1291317 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 14:41:10.711434155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:41:10.720256 1291317 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 14:41:10.720471 1291317 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 14:41:10.723664 1291317 out.go:179] * Using Docker driver with root privileges
	I1213 14:41:10.726526 1291317 cni.go:84] Creating CNI manager for ""
	I1213 14:41:10.726586 1291317 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:41:10.726594 1291317 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 14:41:10.726672 1291317 start.go:353] cluster config:
	{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:41:10.729831 1291317 out.go:179] * Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	I1213 14:41:10.732772 1291317 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 14:41:10.735803 1291317 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 14:41:10.738708 1291317 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:41:10.738744 1291317 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 14:41:10.738760 1291317 cache.go:65] Caching tarball of preloaded images
	I1213 14:41:10.738800 1291317 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 14:41:10.738843 1291317 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 14:41:10.738853 1291317 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 14:41:10.739197 1291317 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
	I1213 14:41:10.739214 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json: {Name:mka487a9cc8c41f7613c6f5f9d1fe183d2b5e51b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
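Note on the step above: the profile config is written while holding a named file lock with a 500ms retry delay and a 1m timeout (the {Delay:500ms Timeout:1m0s} fields in the log line). As a rough illustration of that acquire-with-retry-until-timeout pattern, not minikube's actual lock implementation, a minimal Go sketch using an O_EXCL lock file (the path and helper name are made up):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock retries creating an exclusive lock file every `delay` until
// `timeout` expires, then hands back a release function. Hypothetical helper.
func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for lock %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/config.json.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	// ... write the profile config.json here while the lock is held ...
}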
	I1213 14:41:10.761129 1291317 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 14:41:10.761147 1291317 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 14:41:10.761168 1291317 cache.go:243] Successfully downloaded all kic artifacts
	I1213 14:41:10.761197 1291317 start.go:360] acquireMachinesLock for functional-562018: {Name:mk6a7956e4fce5d8e0f4d6fe039ab67ad6cd688b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:41:10.761309 1291317 start.go:364] duration metric: took 97.729µs to acquireMachinesLock for "functional-562018"
	I1213 14:41:10.761331 1291317 start.go:93] Provisioning new machine with config: &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNS
Log:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 14:41:10.761394 1291317 start.go:125] createHost starting for "" (driver="docker")
	I1213 14:41:10.764952 1291317 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1213 14:41:10.765226 1291317 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:39059 to docker env.
	I1213 14:41:10.765251 1291317 start.go:159] libmachine.API.Create for "functional-562018" (driver="docker")
	I1213 14:41:10.765271 1291317 client.go:173] LocalClient.Create starting
	I1213 14:41:10.765335 1291317 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem
	I1213 14:41:10.765365 1291317 main.go:143] libmachine: Decoding PEM data...
	I1213 14:41:10.765378 1291317 main.go:143] libmachine: Parsing certificate...
	I1213 14:41:10.765439 1291317 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem
	I1213 14:41:10.765454 1291317 main.go:143] libmachine: Decoding PEM data...
	I1213 14:41:10.765464 1291317 main.go:143] libmachine: Parsing certificate...
	I1213 14:41:10.765817 1291317 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 14:41:10.783804 1291317 cli_runner.go:211] docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 14:41:10.783877 1291317 network_create.go:284] running [docker network inspect functional-562018] to gather additional debugging logs...
	I1213 14:41:10.783893 1291317 cli_runner.go:164] Run: docker network inspect functional-562018
	W1213 14:41:10.800008 1291317 cli_runner.go:211] docker network inspect functional-562018 returned with exit code 1
	I1213 14:41:10.800029 1291317 network_create.go:287] error running [docker network inspect functional-562018]: docker network inspect functional-562018: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-562018 not found
	I1213 14:41:10.800041 1291317 network_create.go:289] output of [docker network inspect functional-562018]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-562018 not found
	
	** /stderr **
	I1213 14:41:10.800151 1291317 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 14:41:10.816198 1291317 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018b3fa0}
	I1213 14:41:10.816231 1291317 network_create.go:124] attempt to create docker network functional-562018 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 14:41:10.816288 1291317 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-562018 functional-562018
	I1213 14:41:10.879416 1291317 network_create.go:108] docker network functional-562018 192.168.49.0/24 created
	I1213 14:41:10.879438 1291317 kic.go:121] calculated static IP "192.168.49.2" for the "functional-562018" container
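The subnet picked above (192.168.49.0/24) yields the gateway as the first usable address and the node's static IP as the second (the ClientMin field). A minimal sketch of that derivation with net/netip, assuming a plain "network address + n" rule as the log shows:

package main

import (
	"fmt"
	"net/netip"
)

// nthAddr advances the subnet's network address by n:
// n=1 gives the gateway, n=2 the first client IP used as the node's static IP.
func nthAddr(p netip.Prefix, n int) netip.Addr {
	a := p.Masked().Addr()
	for i := 0; i < n; i++ {
		a = a.Next()
	}
	return a
}

func main() {
	subnet := netip.MustParsePrefix("192.168.49.0/24")
	fmt.Println("gateway:", nthAddr(subnet, 1)) // 192.168.49.1
	fmt.Println("node:   ", nthAddr(subnet, 2)) // 192.168.49.2
}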
	I1213 14:41:10.879516 1291317 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 14:41:10.894192 1291317 cli_runner.go:164] Run: docker volume create functional-562018 --label name.minikube.sigs.k8s.io=functional-562018 --label created_by.minikube.sigs.k8s.io=true
	I1213 14:41:10.912069 1291317 oci.go:103] Successfully created a docker volume functional-562018
	I1213 14:41:10.912160 1291317 cli_runner.go:164] Run: docker run --rm --name functional-562018-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-562018 --entrypoint /usr/bin/test -v functional-562018:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 14:41:11.474739 1291317 oci.go:107] Successfully prepared a docker volume functional-562018
	I1213 14:41:11.474794 1291317 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:41:11.474801 1291317 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 14:41:11.474874 1291317 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-562018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 14:41:15.364284 1291317 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-562018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.889376932s)
	I1213 14:41:15.364306 1291317 kic.go:203] duration metric: took 3.889500991s to extract preloaded images to volume ...
	W1213 14:41:15.364459 1291317 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 14:41:15.364573 1291317 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 14:41:15.432829 1291317 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-562018 --name functional-562018 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-562018 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-562018 --network functional-562018 --ip 192.168.49.2 --volume functional-562018:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 14:41:15.748253 1291317 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Running}}
	I1213 14:41:15.771512 1291317 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:41:15.798715 1291317 cli_runner.go:164] Run: docker exec functional-562018 stat /var/lib/dpkg/alternatives/iptables
	I1213 14:41:15.855627 1291317 oci.go:144] the created container "functional-562018" has a running status.
	I1213 14:41:15.855646 1291317 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa...
	I1213 14:41:16.480373 1291317 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 14:41:16.500182 1291317 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:41:16.517859 1291317 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 14:41:16.517870 1291317 kic_runner.go:114] Args: [docker exec --privileged functional-562018 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 14:41:16.558915 1291317 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:41:16.576188 1291317 machine.go:94] provisionDockerMachine start ...
	I1213 14:41:16.576293 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:41:16.593933 1291317 main.go:143] libmachine: Using SSH client type: native
	I1213 14:41:16.594265 1291317 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:41:16.594272 1291317 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:41:16.594898 1291317 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40720->127.0.0.1:33918: read: connection reset by peer
	I1213 14:41:19.747043 1291317 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:41:19.747065 1291317 ubuntu.go:182] provisioning hostname "functional-562018"
	I1213 14:41:19.747156 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:41:19.764757 1291317 main.go:143] libmachine: Using SSH client type: native
	I1213 14:41:19.765076 1291317 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:41:19.765085 1291317 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-562018 && echo "functional-562018" | sudo tee /etc/hostname
	I1213 14:41:19.924377 1291317 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:41:19.924445 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:41:19.941925 1291317 main.go:143] libmachine: Using SSH client type: native
	I1213 14:41:19.942241 1291317 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:41:19.942255 1291317 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-562018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-562018/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-562018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:41:20.099938 1291317 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:41:20.099953 1291317 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 14:41:20.099991 1291317 ubuntu.go:190] setting up certificates
	I1213 14:41:20.100001 1291317 provision.go:84] configureAuth start
	I1213 14:41:20.100068 1291317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:41:20.118150 1291317 provision.go:143] copyHostCerts
	I1213 14:41:20.118219 1291317 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 14:41:20.118226 1291317 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:41:20.118306 1291317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 14:41:20.118426 1291317 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 14:41:20.118431 1291317 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:41:20.118459 1291317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 14:41:20.118516 1291317 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 14:41:20.118519 1291317 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:41:20.118542 1291317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 14:41:20.118592 1291317 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.functional-562018 san=[127.0.0.1 192.168.49.2 functional-562018 localhost minikube]
	I1213 14:41:20.318865 1291317 provision.go:177] copyRemoteCerts
	I1213 14:41:20.318922 1291317 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:41:20.318969 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:41:20.336603 1291317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:41:20.439060 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 14:41:20.456521 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 14:41:20.473839 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 14:41:20.491254 1291317 provision.go:87] duration metric: took 391.229515ms to configureAuth
	I1213 14:41:20.491271 1291317 ubuntu.go:206] setting minikube options for container-runtime
	I1213 14:41:20.491476 1291317 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:41:20.491484 1291317 machine.go:97] duration metric: took 3.915285318s to provisionDockerMachine
	I1213 14:41:20.491490 1291317 client.go:176] duration metric: took 9.726214447s to LocalClient.Create
	I1213 14:41:20.491503 1291317 start.go:167] duration metric: took 9.726252206s to libmachine.API.Create "functional-562018"
	I1213 14:41:20.491509 1291317 start.go:293] postStartSetup for "functional-562018" (driver="docker")
	I1213 14:41:20.491526 1291317 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:41:20.491573 1291317 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:41:20.491615 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:41:20.508801 1291317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:41:20.615257 1291317 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:41:20.618550 1291317 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 14:41:20.618568 1291317 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 14:41:20.618579 1291317 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 14:41:20.618635 1291317 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 14:41:20.618725 1291317 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 14:41:20.618819 1291317 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> hosts in /etc/test/nested/copy/1252934
	I1213 14:41:20.618862 1291317 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1252934
	I1213 14:41:20.626510 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:41:20.643629 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts --> /etc/test/nested/copy/1252934/hosts (40 bytes)
	I1213 14:41:20.660823 1291317 start.go:296] duration metric: took 169.299563ms for postStartSetup
	I1213 14:41:20.661177 1291317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:41:20.678331 1291317 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
	I1213 14:41:20.678636 1291317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:41:20.678684 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:41:20.695728 1291317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:41:20.796214 1291317 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 14:41:20.801020 1291317 start.go:128] duration metric: took 10.039612701s to createHost
	I1213 14:41:20.801036 1291317 start.go:83] releasing machines lock for "functional-562018", held for 10.039719948s
	I1213 14:41:20.801105 1291317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:41:20.821828 1291317 out.go:179] * Found network options:
	I1213 14:41:20.824876 1291317 out.go:179]   - HTTP_PROXY=localhost:39059
	W1213 14:41:20.827760 1291317 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1213 14:41:20.830604 1291317 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1213 14:41:20.833551 1291317 ssh_runner.go:195] Run: cat /version.json
	I1213 14:41:20.833592 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:41:20.833601 1291317 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:41:20.833650 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:41:20.860119 1291317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:41:20.861424 1291317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:41:21.050374 1291317 ssh_runner.go:195] Run: systemctl --version
	I1213 14:41:21.056838 1291317 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 14:41:21.061240 1291317 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:41:21.061303 1291317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:41:21.088779 1291317 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 14:41:21.088802 1291317 start.go:496] detecting cgroup driver to use...
	I1213 14:41:21.088834 1291317 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 14:41:21.088893 1291317 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 14:41:21.104109 1291317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 14:41:21.117080 1291317 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:41:21.117133 1291317 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:41:21.134784 1291317 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:41:21.153477 1291317 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:41:21.272360 1291317 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:41:21.392638 1291317 docker.go:234] disabling docker service ...
	I1213 14:41:21.392701 1291317 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:41:21.414118 1291317 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:41:21.427448 1291317 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:41:21.550055 1291317 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:41:21.671395 1291317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:41:21.683908 1291317 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:41:21.698098 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 14:41:21.707263 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 14:41:21.717101 1291317 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 14:41:21.717165 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 14:41:21.726164 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:41:21.734973 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 14:41:21.743567 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:41:21.752005 1291317 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:41:21.760444 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 14:41:21.769373 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 14:41:21.778366 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 14:41:21.787216 1291317 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:41:21.794474 1291317 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:41:21.801928 1291317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:41:21.906989 1291317 ssh_runner.go:195] Run: sudo systemctl restart containerd
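The block above rewrites /etc/containerd/config.toml in place with a series of sed substitutions (pause image, SystemdCgroup = false for the cgroupfs driver, CNI conf_dir, unprivileged ports) and then reloads systemd and restarts containerd. A rough Go equivalent of one of those substitutions, with the path and pattern taken from the sed expression in the log rather than from minikube's source:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}

Keeping the captured leading whitespace preserves the TOML nesting, which is why the sed pattern (and this sketch) captures the indentation instead of anchoring on the key alone.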
	I1213 14:41:22.043619 1291317 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 14:41:22.043692 1291317 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 14:41:22.047622 1291317 start.go:564] Will wait 60s for crictl version
	I1213 14:41:22.047678 1291317 ssh_runner.go:195] Run: which crictl
	I1213 14:41:22.051506 1291317 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 14:41:22.076872 1291317 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
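Before continuing, the start sequence waits up to 60s for /run/containerd/containerd.sock to exist and for crictl to report a version. A minimal poll-until-deadline sketch of the first of those waits (the 60s figure comes from the log; the helper name is invented):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a socket path until it exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s did not appear within %s", path, timeout)
		}
		time.Sleep(250 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
}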
	I1213 14:41:22.076938 1291317 ssh_runner.go:195] Run: containerd --version
	I1213 14:41:22.099670 1291317 ssh_runner.go:195] Run: containerd --version
	I1213 14:41:22.123011 1291317 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 14:41:22.125955 1291317 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 14:41:22.141441 1291317 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 14:41:22.145215 1291317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
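The bash one-liner above makes the host.minikube.internal entry idempotent: any existing line for that name is filtered out and the current gateway mapping is appended before the file is copied back. The same filter-and-append expressed in Go, writing to a temporary file as the log does (paths and the tab-separated format come from the log):

package main

import (
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Drop any existing "<ip>\thost.minikube.internal" line, then append the current one.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.49.1\thost.minikube.internal")
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	// A privileged copy back over /etc/hosts would follow, as in the log.
}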
	I1213 14:41:22.154868 1291317 kubeadm.go:884] updating cluster {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:41:22.154988 1291317 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:41:22.155052 1291317 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:41:22.179490 1291317 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:41:22.179502 1291317 containerd.go:534] Images already preloaded, skipping extraction
	I1213 14:41:22.179561 1291317 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:41:22.203593 1291317 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:41:22.203607 1291317 cache_images.go:86] Images are preloaded, skipping loading
	I1213 14:41:22.203613 1291317 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 14:41:22.203701 1291317 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-562018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 14:41:22.203765 1291317 ssh_runner.go:195] Run: sudo crictl info
	I1213 14:41:22.232840 1291317 cni.go:84] Creating CNI manager for ""
	I1213 14:41:22.232850 1291317 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:41:22.232870 1291317 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:41:22.232892 1291317 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-562018 NodeName:functional-562018 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:41:22.232999 1291317 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-562018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 14:41:22.233066 1291317 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 14:41:22.240871 1291317 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:41:22.240931 1291317 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:41:22.248588 1291317 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 14:41:22.261453 1291317 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 14:41:22.274286 1291317 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
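The kubelet unit drop-in and the kubeadm.yaml shown earlier are rendered from the cluster config and then copied onto the node (2237 bytes for kubeadm.yaml.new above). A toy text/template rendering of just the InitConfiguration head, to illustrate the shape of that generation step; the template and field names below are illustrative, not minikube's own:

package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	if err := t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.49.2",
		"BindPort":         8441,
		"NodeName":         "functional-562018",
	}); err != nil {
		panic(err)
	}
}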
	I1213 14:41:22.287123 1291317 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 14:41:22.290857 1291317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 14:41:22.300507 1291317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:41:22.408361 1291317 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:41:22.424556 1291317 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018 for IP: 192.168.49.2
	I1213 14:41:22.424567 1291317 certs.go:195] generating shared ca certs ...
	I1213 14:41:22.424582 1291317 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:41:22.424712 1291317 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 14:41:22.424751 1291317 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 14:41:22.424757 1291317 certs.go:257] generating profile certs ...
	I1213 14:41:22.424814 1291317 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key
	I1213 14:41:22.424822 1291317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt with IP's: []
	I1213 14:41:22.806904 1291317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt ...
	I1213 14:41:22.806922 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: {Name:mk5ab195bf1a7056b153a6bbf68eee9801937361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:41:22.807131 1291317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key ...
	I1213 14:41:22.807138 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key: {Name:mkb108d763016aabf0c2fbb9da04655d4ad7bb8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:41:22.807232 1291317 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee
	I1213 14:41:22.807244 1291317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt.d0505aee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 14:41:23.082234 1291317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt.d0505aee ...
	I1213 14:41:23.082257 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt.d0505aee: {Name:mkc9642dd6a076d13a01d0176e4833c78b56f473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:41:23.082459 1291317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee ...
	I1213 14:41:23.082467 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee: {Name:mkd82d675c96c67f08f95e66a204f12bd06128cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:41:23.082561 1291317 certs.go:382] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt.d0505aee -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt
	I1213 14:41:23.082634 1291317 certs.go:386] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key
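The apiserver profile certificate above is issued with the service VIP, loopback and node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2) as IP SANs. A condensed crypto/x509 sketch of issuing a certificate with those SANs; it is self-signed here for brevity, whereas the real flow signs with the shared minikube CA generated earlier:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs as listed in the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}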
	I1213 14:41:23.082704 1291317 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key
	I1213 14:41:23.082721 1291317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt with IP's: []
	I1213 14:41:23.386520 1291317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt ...
	I1213 14:41:23.386536 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt: {Name:mk38f920eb041bcd85320119d02a87fad63b434a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:41:23.386712 1291317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key ...
	I1213 14:41:23.386720 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key: {Name:mk067b9e8d42c9dea7e4b5defd6063b282b5adbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:41:23.386896 1291317 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 14:41:23.386935 1291317 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 14:41:23.386942 1291317 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:41:23.386971 1291317 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 14:41:23.386993 1291317 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:41:23.387016 1291317 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 14:41:23.387059 1291317 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:41:23.387752 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:41:23.408658 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:41:23.427993 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:41:23.446668 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 14:41:23.464719 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 14:41:23.482423 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 14:41:23.499968 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:41:23.517971 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:41:23.536190 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:41:23.554570 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 14:41:23.572670 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 14:41:23.590516 1291317 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:41:23.603832 1291317 ssh_runner.go:195] Run: openssl version
	I1213 14:41:23.610392 1291317 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:41:23.618129 1291317 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:41:23.625535 1291317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:41:23.629179 1291317 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:41:23.629234 1291317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:41:23.670264 1291317 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:41:23.677744 1291317 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 14:41:23.685265 1291317 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 14:41:23.692739 1291317 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 14:41:23.700339 1291317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 14:41:23.704057 1291317 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:41:23.704130 1291317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 14:41:23.744828 1291317 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 14:41:23.752071 1291317 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1252934.pem /etc/ssl/certs/51391683.0
	I1213 14:41:23.759251 1291317 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 14:41:23.766818 1291317 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 14:41:23.774446 1291317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 14:41:23.778137 1291317 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:41:23.778213 1291317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 14:41:23.821193 1291317 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 14:41:23.828637 1291317 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12529342.pem /etc/ssl/certs/3ec20f2e.0
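Each CA certificate installed above is hashed with openssl x509 -hash -noout and then linked as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL resolves trust anchors by subject hash. A small Go sketch of the same two steps, shelling out exactly as the log does:

package main

import (
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/etc/ssl/certs/minikubeCA.pem" // link target used in the log
	// Equivalent of: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of: ln -fs <cert> <link>
	if err := exec.Command("ln", "-fs", cert, link).Run(); err != nil {
		panic(err)
	}
}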
	I1213 14:41:23.835987 1291317 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:41:23.839834 1291317 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 14:41:23.839887 1291317 kubeadm.go:401] StartCluster: {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:41:23.839966 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 14:41:23.840026 1291317 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:41:23.866656 1291317 cri.go:89] found id: ""
	I1213 14:41:23.866717 1291317 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:41:23.874680 1291317 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 14:41:23.882664 1291317 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 14:41:23.882737 1291317 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:41:23.890810 1291317 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 14:41:23.890821 1291317 kubeadm.go:158] found existing configuration files:
	
	I1213 14:41:23.890896 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 14:41:23.898898 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 14:41:23.898961 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 14:41:23.906510 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 14:41:23.914523 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 14:41:23.914589 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 14:41:23.922447 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 14:41:23.930721 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 14:41:23.930780 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:41:23.938490 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 14:41:23.946556 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 14:41:23.946617 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
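
Before each init attempt, minikube checks every kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not contain it, which is what the grep/rm pairs above are doing. A minimal sketch of the same cleanup done by hand, assuming the endpoint this profile uses (https://control-plane.minikube.internal:8441):

    # remove kubeconfigs that do not point at the expected control plane
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
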
	I1213 14:41:23.954295 1291317 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 14:41:23.995097 1291317 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 14:41:23.995542 1291317 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 14:41:24.096930 1291317 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 14:41:24.096999 1291317 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 14:41:24.097033 1291317 kubeadm.go:319] OS: Linux
	I1213 14:41:24.097093 1291317 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 14:41:24.097168 1291317 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 14:41:24.097221 1291317 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 14:41:24.097278 1291317 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 14:41:24.097328 1291317 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 14:41:24.097384 1291317 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 14:41:24.097428 1291317 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 14:41:24.097475 1291317 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 14:41:24.097528 1291317 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 14:41:24.172417 1291317 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 14:41:24.172520 1291317 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 14:41:24.172635 1291317 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 14:41:24.183685 1291317 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 14:41:24.190148 1291317 out.go:252]   - Generating certificates and keys ...
	I1213 14:41:24.190238 1291317 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 14:41:24.190303 1291317 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 14:41:24.470926 1291317 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 14:41:24.607717 1291317 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 14:41:24.862134 1291317 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 14:41:24.932002 1291317 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 14:41:25.274699 1291317 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 14:41:25.274855 1291317 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-562018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 14:41:25.429840 1291317 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 14:41:25.429993 1291317 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-562018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 14:41:26.039786 1291317 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 14:41:26.294725 1291317 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 14:41:26.555089 1291317 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 14:41:26.555232 1291317 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 14:41:26.737194 1291317 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 14:41:27.132301 1291317 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 14:41:27.252866 1291317 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 14:41:27.409575 1291317 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 14:41:27.703301 1291317 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 14:41:27.704048 1291317 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 14:41:27.706873 1291317 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 14:41:27.710780 1291317 out.go:252]   - Booting up control plane ...
	I1213 14:41:27.710880 1291317 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 14:41:27.710956 1291317 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 14:41:27.711022 1291317 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 14:41:27.727883 1291317 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 14:41:27.728167 1291317 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 14:41:27.737323 1291317 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 14:41:27.738013 1291317 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 14:41:27.738316 1291317 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 14:41:27.877372 1291317 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 14:41:27.877486 1291317 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 14:45:27.878528 1291317 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001226262s
	I1213 14:45:27.878549 1291317 kubeadm.go:319] 
	I1213 14:45:27.878605 1291317 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 14:45:27.878638 1291317 kubeadm.go:319] 	- The kubelet is not running
	I1213 14:45:27.878741 1291317 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 14:45:27.878745 1291317 kubeadm.go:319] 
	I1213 14:45:27.878848 1291317 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 14:45:27.878879 1291317 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 14:45:27.878909 1291317 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 14:45:27.878912 1291317 kubeadm.go:319] 
	I1213 14:45:27.884102 1291317 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 14:45:27.884844 1291317 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 14:45:27.885034 1291317 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 14:45:27.885456 1291317 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 14:45:27.885464 1291317 kubeadm.go:319] 
	I1213 14:45:27.885583 1291317 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
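
The init attempt above fails because the kubelet never reports healthy on 127.0.0.1:10248. A minimal triage sketch using the commands kubeadm itself suggests, run inside the minikube node (profile name functional-562018 taken from this run):

    # open a shell on the node for this profile
    minikube ssh -p functional-562018

    # inside the node: poll the same health endpoint kubeadm was waiting on
    curl -sS http://127.0.0.1:10248/healthz; echo

    # unit state and the most recent kubelet log lines
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
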
	W1213 14:45:27.885709 1291317 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-562018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-562018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226262s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
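
One of the preflight warnings above says cgroup v1 support is deprecated and that a kubelet at v1.35 or newer only tolerates cgroup v1 when the FailCgroupV1 configuration option is set to false. A quick check of which cgroup mode the node is actually on, plus a hedged sketch of the KubeletConfiguration fragment the warning implies (field name taken from the warning text; how minikube would patch it into kubeadm.yaml is not shown here):

    # cgroup2fs means the unified cgroup v2 hierarchy, tmpfs means legacy cgroup v1
    stat -fc %T /sys/fs/cgroup/

    # hedged fragment implied by the warning; not applied anywhere in this run
    cat <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF
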
	I1213 14:45:27.885811 1291317 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 14:45:28.298047 1291317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 14:45:28.311884 1291317 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 14:45:28.311946 1291317 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:45:28.320182 1291317 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 14:45:28.320191 1291317 kubeadm.go:158] found existing configuration files:
	
	I1213 14:45:28.320243 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 14:45:28.327851 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 14:45:28.327911 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 14:45:28.335606 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 14:45:28.343859 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 14:45:28.343917 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 14:45:28.351548 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 14:45:28.359482 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 14:45:28.359537 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:45:28.367254 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 14:45:28.375153 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 14:45:28.375225 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 14:45:28.382672 1291317 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 14:45:28.423941 1291317 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 14:45:28.423992 1291317 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 14:45:28.503918 1291317 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 14:45:28.503984 1291317 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 14:45:28.504018 1291317 kubeadm.go:319] OS: Linux
	I1213 14:45:28.504062 1291317 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 14:45:28.504109 1291317 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 14:45:28.504155 1291317 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 14:45:28.504202 1291317 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 14:45:28.504248 1291317 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 14:45:28.504302 1291317 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 14:45:28.504345 1291317 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 14:45:28.504392 1291317 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 14:45:28.504436 1291317 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 14:45:28.578183 1291317 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 14:45:28.578287 1291317 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 14:45:28.578376 1291317 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 14:45:28.587730 1291317 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 14:45:28.593187 1291317 out.go:252]   - Generating certificates and keys ...
	I1213 14:45:28.593294 1291317 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 14:45:28.593366 1291317 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 14:45:28.593463 1291317 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 14:45:28.593524 1291317 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 14:45:28.593605 1291317 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 14:45:28.593658 1291317 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 14:45:28.593726 1291317 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 14:45:28.593798 1291317 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 14:45:28.593877 1291317 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 14:45:28.593955 1291317 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 14:45:28.593998 1291317 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 14:45:28.594058 1291317 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 14:45:28.823814 1291317 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 14:45:29.028475 1291317 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 14:45:29.231229 1291317 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 14:45:29.658022 1291317 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 14:45:30.068058 1291317 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 14:45:30.068793 1291317 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 14:45:30.071804 1291317 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 14:45:30.075282 1291317 out.go:252]   - Booting up control plane ...
	I1213 14:45:30.075406 1291317 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 14:45:30.075483 1291317 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 14:45:30.075548 1291317 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 14:45:30.099243 1291317 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 14:45:30.099378 1291317 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 14:45:30.108440 1291317 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 14:45:30.108932 1291317 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 14:45:30.109205 1291317 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 14:45:30.237905 1291317 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 14:45:30.238019 1291317 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 14:49:30.238851 1291317 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001274207s
	I1213 14:49:30.238872 1291317 kubeadm.go:319] 
	I1213 14:49:30.238928 1291317 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 14:49:30.238960 1291317 kubeadm.go:319] 	- The kubelet is not running
	I1213 14:49:30.239064 1291317 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 14:49:30.239068 1291317 kubeadm.go:319] 
	I1213 14:49:30.239204 1291317 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 14:49:30.239245 1291317 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 14:49:30.239276 1291317 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 14:49:30.239279 1291317 kubeadm.go:319] 
	I1213 14:49:30.243559 1291317 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 14:49:30.244038 1291317 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 14:49:30.244156 1291317 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 14:49:30.244401 1291317 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 14:49:30.244405 1291317 kubeadm.go:319] 
	I1213 14:49:30.244474 1291317 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 14:49:30.244539 1291317 kubeadm.go:403] duration metric: took 8m6.40465461s to StartCluster
	I1213 14:49:30.244574 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:49:30.244645 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:49:30.269858 1291317 cri.go:89] found id: ""
	I1213 14:49:30.269885 1291317 logs.go:282] 0 containers: []
	W1213 14:49:30.269892 1291317 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:49:30.269897 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:49:30.269957 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:49:30.295913 1291317 cri.go:89] found id: ""
	I1213 14:49:30.295927 1291317 logs.go:282] 0 containers: []
	W1213 14:49:30.295934 1291317 logs.go:284] No container was found matching "etcd"
	I1213 14:49:30.295939 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:49:30.296006 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:49:30.321841 1291317 cri.go:89] found id: ""
	I1213 14:49:30.321855 1291317 logs.go:282] 0 containers: []
	W1213 14:49:30.321862 1291317 logs.go:284] No container was found matching "coredns"
	I1213 14:49:30.321867 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:49:30.321927 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:49:30.358313 1291317 cri.go:89] found id: ""
	I1213 14:49:30.358327 1291317 logs.go:282] 0 containers: []
	W1213 14:49:30.358334 1291317 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:49:30.358339 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:49:30.358397 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:49:30.390205 1291317 cri.go:89] found id: ""
	I1213 14:49:30.390219 1291317 logs.go:282] 0 containers: []
	W1213 14:49:30.390227 1291317 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:49:30.390232 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:49:30.390292 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:49:30.421227 1291317 cri.go:89] found id: ""
	I1213 14:49:30.421242 1291317 logs.go:282] 0 containers: []
	W1213 14:49:30.421250 1291317 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:49:30.421255 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:49:30.421318 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:49:30.446736 1291317 cri.go:89] found id: ""
	I1213 14:49:30.446756 1291317 logs.go:282] 0 containers: []
	W1213 14:49:30.446765 1291317 logs.go:284] No container was found matching "kindnet"
	I1213 14:49:30.446775 1291317 logs.go:123] Gathering logs for kubelet ...
	I1213 14:49:30.446785 1291317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:49:30.506280 1291317 logs.go:123] Gathering logs for dmesg ...
	I1213 14:49:30.506300 1291317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:49:30.524074 1291317 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:49:30.524091 1291317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:49:30.591576 1291317 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:49:30.582607    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:49:30.584266    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:49:30.584973    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:49:30.586642    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:49:30.587613    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:49:30.582607    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:49:30.584266    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:49:30.584973    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:49:30.586642    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:49:30.587613    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:49:30.591587 1291317 logs.go:123] Gathering logs for containerd ...
	I1213 14:49:30.591600 1291317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:49:30.628672 1291317 logs.go:123] Gathering logs for container status ...
	I1213 14:49:30.628695 1291317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
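
None of this log gathering needs a working API server, so it can also be rerun by hand against the stuck node, for example:

    # same collection minikube performs above, run from the host
    minikube ssh -p functional-562018 -- sudo journalctl -u kubelet -n 400 --no-pager
    minikube ssh -p functional-562018 -- sudo journalctl -u containerd -n 400 --no-pager
    minikube ssh -p functional-562018 -- sudo crictl ps -a
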
	W1213 14:49:30.656249 1291317 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001274207s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 14:49:30.656305 1291317 out.go:285] * 
	W1213 14:49:30.659411 1291317 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001274207s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 14:49:30.659450 1291317 out.go:285] * 
	W1213 14:49:30.661795 1291317 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
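
The log bundle the box above asks for can be produced directly from the failing profile, for example:

    minikube logs -p functional-562018 --file=logs.txt
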
	I1213 14:49:30.666707 1291317 out.go:203] 
	W1213 14:49:30.670410 1291317 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001274207s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 14:49:30.670458 1291317 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 14:49:30.670476 1291317 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 14:49:30.673588 1291317 out.go:203] 
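
The suggestion above amounts to recreating the profile with the kubelet pinned to the systemd cgroup driver. A hedged sketch of that retry, reusing the driver, container runtime and Kubernetes version recorded in the StartCluster config earlier in this log:

    minikube delete -p functional-562018
    minikube start -p functional-562018 \
      --driver=docker \
      --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd
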
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975014211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975030859Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975067593Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975081886Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975092060Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975103170Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975111965Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975121885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975138172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975169991Z" level=info msg="Connect containerd service"
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975508085Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.976079298Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.993990477Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.994055034Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.994084490Z" level=info msg="Start subscribing containerd event"
	Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.994130454Z" level=info msg="Start recovering state"
	Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.040971827Z" level=info msg="Start event monitor"
	Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.041022460Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.041031650Z" level=info msg="Start streaming server"
	Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.041041209Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.041050004Z" level=info msg="runtime interface starting up..."
	Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.041062878Z" level=info msg="starting plugins..."
	Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.041083423Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 14:41:22 functional-562018 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.043229514Z" level=info msg="containerd successfully booted in 0.089795s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:49:31.638154    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:49:31.638541    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:49:31.639918    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:49:31.640541    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:49:31.642121    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 14:49:31 up  6:32,  0 user,  load average: 0.25, 0.58, 1.06
	Linux functional-562018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 14:49:28 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:49:28 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 13 14:49:28 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:49:28 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:49:28 functional-562018 kubelet[4678]: E1213 14:49:28.879760    4678 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:49:28 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:49:28 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:49:29 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 14:49:29 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:49:29 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:49:29 functional-562018 kubelet[4683]: E1213 14:49:29.637317    4683 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:49:29 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:49:29 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:49:30 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 14:49:30 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:49:30 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:49:30 functional-562018 kubelet[4716]: E1213 14:49:30.400289    4716 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:49:30 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:49:30 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:49:31 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 14:49:31 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:49:31 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:49:31 functional-562018 kubelet[4788]: E1213 14:49:31.156130    4788 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:49:31 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:49:31 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
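Note on the kubelet section above: the v1.35.0-beta.0 kubelet exits during configuration validation because the node is running a cgroup v1 kernel ("kubelet is configured to not run on a host using cgroup v1"), so systemd keeps restarting it (restart counters 318-321) and the apiserver on port 8441 never comes up, which is why the `describe nodes` call is refused. A quick, illustrative check of the host's cgroup version (not part of the test run; assumes the functional-562018 profile container is still up) would be:

	# prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1
	out/minikube-linux-arm64 ssh -p functional-562018 "stat -fc %T /sys/fs/cgroup/"

Running these jobs on a cgroup v2 host (or with a kubelet version/configuration that still tolerates cgroup v1) would be the likely way to avoid this failure.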
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 6 (344.998217ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 14:49:32.110359 1296988 status.go:458] kubeconfig endpoint: get endpoint: "functional-562018" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.64s)
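The status check above also shows the follow-on symptom: the "functional-562018" entry is missing from the test kubeconfig, and minikube itself suggests refreshing the context. Purely as an illustration (the harness does not run this), the fix it points at would look like:

	out/minikube-linux-arm64 -p functional-562018 update-context
	kubectl config get-contexts --kubeconfig=/home/jenkins/minikube-integration/22122-1251074/kubeconfig

This only repairs the kubeconfig pointer; it does not address the kubelet failure that left the apiserver stopped.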

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (367.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 14:49:32.127046 1252934 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562018 --alsologtostderr -v=8
E1213 14:50:18.171562 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:50:45.880603 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:53:42.552823 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:55:05.628176 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:55:18.171372 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-562018 --alsologtostderr -v=8: exit status 80 (6m5.234062287s)

                                                
                                                
-- stdout --
	* [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
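The stderr trace that follows shows where the cgroup setup comes from: minikube detects the host's "cgroupfs" cgroup driver (detect.go) and rewrites /etc/containerd/config.toml with SystemdCgroup = false before restarting containerd. The restart succeeds, but the soft start still times out after ~6 minutes, which is consistent with the cgroup v1 kubelet validation failure seen in the first test. One illustrative way to confirm what ended up in the node's containerd config (not executed by the test):

	out/minikube-linux-arm64 ssh -p functional-562018 "sudo grep -n SystemdCgroup /etc/containerd/config.toml"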
** stderr ** 
	I1213 14:49:32.175934 1297065 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:49:32.176062 1297065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:49:32.176074 1297065 out.go:374] Setting ErrFile to fd 2...
	I1213 14:49:32.176081 1297065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:49:32.176329 1297065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:49:32.176775 1297065 out.go:368] Setting JSON to false
	I1213 14:49:32.177662 1297065 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23521,"bootTime":1765613851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:49:32.177756 1297065 start.go:143] virtualization:  
	I1213 14:49:32.181250 1297065 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:49:32.184279 1297065 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:49:32.184349 1297065 notify.go:221] Checking for updates...
	I1213 14:49:32.190681 1297065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:49:32.193733 1297065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:32.196589 1297065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:49:32.199444 1297065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 14:49:32.202364 1297065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:49:32.205680 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:32.205788 1297065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:49:32.233101 1297065 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:49:32.233224 1297065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:49:32.299716 1297065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 14:49:32.290425951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:49:32.299832 1297065 docker.go:319] overlay module found
	I1213 14:49:32.305094 1297065 out.go:179] * Using the docker driver based on existing profile
	I1213 14:49:32.307726 1297065 start.go:309] selected driver: docker
	I1213 14:49:32.307744 1297065 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:32.307856 1297065 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:49:32.307958 1297065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:49:32.364202 1297065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 14:49:32.354888078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:49:32.364608 1297065 cni.go:84] Creating CNI manager for ""
	I1213 14:49:32.364673 1297065 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:49:32.364721 1297065 start.go:353] cluster config:
	{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPa
th: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:32.367887 1297065 out.go:179] * Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	I1213 14:49:32.370579 1297065 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 14:49:32.373599 1297065 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 14:49:32.376553 1297065 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:49:32.376606 1297065 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 14:49:32.376621 1297065 cache.go:65] Caching tarball of preloaded images
	I1213 14:49:32.376630 1297065 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 14:49:32.376703 1297065 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 14:49:32.376713 1297065 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 14:49:32.376820 1297065 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
	I1213 14:49:32.396105 1297065 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 14:49:32.396128 1297065 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 14:49:32.396160 1297065 cache.go:243] Successfully downloaded all kic artifacts
	I1213 14:49:32.396191 1297065 start.go:360] acquireMachinesLock for functional-562018: {Name:mk6a7956e4fce5d8e0f4d6fe039ab67ad6cd688b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:49:32.396254 1297065 start.go:364] duration metric: took 40.319µs to acquireMachinesLock for "functional-562018"
	I1213 14:49:32.396277 1297065 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:49:32.396287 1297065 fix.go:54] fixHost starting: 
	I1213 14:49:32.396543 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:32.413077 1297065 fix.go:112] recreateIfNeeded on functional-562018: state=Running err=<nil>
	W1213 14:49:32.413105 1297065 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:49:32.416298 1297065 out.go:252] * Updating the running docker "functional-562018" container ...
	I1213 14:49:32.416337 1297065 machine.go:94] provisionDockerMachine start ...
	I1213 14:49:32.416434 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.434428 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.434755 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.434764 1297065 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:49:32.588560 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:49:32.588587 1297065 ubuntu.go:182] provisioning hostname "functional-562018"
	I1213 14:49:32.588651 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.607983 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.608286 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.608297 1297065 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-562018 && echo "functional-562018" | sudo tee /etc/hostname
	I1213 14:49:32.769183 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:49:32.769274 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.789428 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.789750 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.789773 1297065 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-562018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-562018/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-562018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:49:32.943886 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:49:32.943914 1297065 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 14:49:32.943934 1297065 ubuntu.go:190] setting up certificates
	I1213 14:49:32.943953 1297065 provision.go:84] configureAuth start
	I1213 14:49:32.944016 1297065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:49:32.962011 1297065 provision.go:143] copyHostCerts
	I1213 14:49:32.962065 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:49:32.962109 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 14:49:32.962123 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:49:32.962204 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 14:49:32.962309 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:49:32.962331 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 14:49:32.962339 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:49:32.962367 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 14:49:32.962422 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:49:32.962443 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 14:49:32.962451 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:49:32.962476 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 14:49:32.962539 1297065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.functional-562018 san=[127.0.0.1 192.168.49.2 functional-562018 localhost minikube]
	I1213 14:49:33.179564 1297065 provision.go:177] copyRemoteCerts
	I1213 14:49:33.179638 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:49:33.179690 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.200012 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.307268 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 14:49:33.307352 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 14:49:33.325080 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 14:49:33.325187 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 14:49:33.348055 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 14:49:33.348124 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 14:49:33.368733 1297065 provision.go:87] duration metric: took 424.756928ms to configureAuth
	I1213 14:49:33.368776 1297065 ubuntu.go:206] setting minikube options for container-runtime
	I1213 14:49:33.368958 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:33.368972 1297065 machine.go:97] duration metric: took 952.628419ms to provisionDockerMachine
	I1213 14:49:33.368979 1297065 start.go:293] postStartSetup for "functional-562018" (driver="docker")
	I1213 14:49:33.368990 1297065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:49:33.369043 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:49:33.369100 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.388800 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.495227 1297065 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:49:33.498339 1297065 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 14:49:33.498360 1297065 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 14:49:33.498365 1297065 command_runner.go:130] > VERSION_ID="12"
	I1213 14:49:33.498369 1297065 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 14:49:33.498374 1297065 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 14:49:33.498378 1297065 command_runner.go:130] > ID=debian
	I1213 14:49:33.498382 1297065 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 14:49:33.498387 1297065 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 14:49:33.498400 1297065 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 14:49:33.498729 1297065 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 14:49:33.498752 1297065 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 14:49:33.498764 1297065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 14:49:33.498818 1297065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 14:49:33.498907 1297065 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 14:49:33.498914 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> /etc/ssl/certs/12529342.pem
	I1213 14:49:33.498991 1297065 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> hosts in /etc/test/nested/copy/1252934
	I1213 14:49:33.498996 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> /etc/test/nested/copy/1252934/hosts
	I1213 14:49:33.499038 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1252934
	I1213 14:49:33.506503 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:49:33.524063 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts --> /etc/test/nested/copy/1252934/hosts (40 bytes)
	I1213 14:49:33.542234 1297065 start.go:296] duration metric: took 173.238726ms for postStartSetup
	I1213 14:49:33.542347 1297065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:49:33.542395 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.560689 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.668283 1297065 command_runner.go:130] > 18%
	I1213 14:49:33.668429 1297065 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 14:49:33.673015 1297065 command_runner.go:130] > 160G
	I1213 14:49:33.673516 1297065 fix.go:56] duration metric: took 1.277224674s for fixHost
	I1213 14:49:33.673545 1297065 start.go:83] releasing machines lock for "functional-562018", held for 1.277279647s
	I1213 14:49:33.673651 1297065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:49:33.691077 1297065 ssh_runner.go:195] Run: cat /version.json
	I1213 14:49:33.691140 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.691468 1297065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:49:33.691538 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.709148 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.719417 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.814811 1297065 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 14:49:33.814943 1297065 ssh_runner.go:195] Run: systemctl --version
	I1213 14:49:33.903672 1297065 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 14:49:33.906947 1297065 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 14:49:33.906982 1297065 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 14:49:33.907055 1297065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 14:49:33.911546 1297065 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 14:49:33.911590 1297065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:49:33.911661 1297065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:49:33.919539 1297065 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 14:49:33.919560 1297065 start.go:496] detecting cgroup driver to use...
	I1213 14:49:33.919591 1297065 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 14:49:33.919652 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 14:49:33.935466 1297065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 14:49:33.948503 1297065 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:49:33.948565 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:49:33.964251 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:49:33.977532 1297065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:49:34.098935 1297065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:49:34.240532 1297065 docker.go:234] disabling docker service ...
	I1213 14:49:34.240643 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:49:34.257037 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:49:34.270650 1297065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:49:34.390022 1297065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:49:34.521564 1297065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:49:34.535848 1297065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:49:34.549721 1297065 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 14:49:34.551043 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 14:49:34.560293 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 14:49:34.569539 1297065 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 14:49:34.569607 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 14:49:34.578725 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:49:34.587464 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 14:49:34.595867 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:49:34.604914 1297065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:49:34.612837 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 14:49:34.621746 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 14:49:34.631405 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 14:49:34.640934 1297065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:49:34.647949 1297065 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 14:49:34.649110 1297065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:49:34.656959 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:34.763520 1297065 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 14:49:34.891785 1297065 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 14:49:34.891886 1297065 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 14:49:34.896000 1297065 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 14:49:34.896045 1297065 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 14:49:34.896074 1297065 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1213 14:49:34.896088 1297065 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 14:49:34.896099 1297065 command_runner.go:130] > Access: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896109 1297065 command_runner.go:130] > Modify: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896114 1297065 command_runner.go:130] > Change: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896117 1297065 command_runner.go:130] >  Birth: -
	I1213 14:49:34.896860 1297065 start.go:564] Will wait 60s for crictl version
	I1213 14:49:34.896947 1297065 ssh_runner.go:195] Run: which crictl
	I1213 14:49:34.901248 1297065 command_runner.go:130] > /usr/local/bin/crictl
	I1213 14:49:34.901933 1297065 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 14:49:34.925912 1297065 command_runner.go:130] > Version:  0.1.0
	I1213 14:49:34.925937 1297065 command_runner.go:130] > RuntimeName:  containerd
	I1213 14:49:34.925943 1297065 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 14:49:34.925948 1297065 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 14:49:34.928438 1297065 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 14:49:34.928554 1297065 ssh_runner.go:195] Run: containerd --version
	I1213 14:49:34.949487 1297065 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 14:49:34.951799 1297065 ssh_runner.go:195] Run: containerd --version
	I1213 14:49:34.970090 1297065 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 14:49:34.977895 1297065 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 14:49:34.980777 1297065 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 14:49:34.997091 1297065 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 14:49:35.003196 1297065 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 14:49:35.003415 1297065 kubeadm.go:884] updating cluster {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:49:35.003575 1297065 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:49:35.003657 1297065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:49:35.028469 1297065 command_runner.go:130] > {
	I1213 14:49:35.028488 1297065 command_runner.go:130] >   "images":  [
	I1213 14:49:35.028493 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028502 1297065 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 14:49:35.028509 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028514 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 14:49:35.028518 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028522 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028533 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 14:49:35.028536 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028541 1297065 command_runner.go:130] >       "size":  "40636774",
	I1213 14:49:35.028545 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028549 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028552 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028555 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028563 1297065 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 14:49:35.028567 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028572 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 14:49:35.028574 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028583 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028592 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 14:49:35.028595 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028599 1297065 command_runner.go:130] >       "size":  "8034419",
	I1213 14:49:35.028603 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028607 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028610 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028613 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028620 1297065 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 14:49:35.028624 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028630 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 14:49:35.028633 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028641 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028649 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 14:49:35.028652 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028656 1297065 command_runner.go:130] >       "size":  "21168808",
	I1213 14:49:35.028660 1297065 command_runner.go:130] >       "username":  "nonroot",
	I1213 14:49:35.028664 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028667 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028670 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028677 1297065 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 14:49:35.028680 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028685 1297065 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 14:49:35.028688 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028691 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028698 1297065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 14:49:35.028701 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028706 1297065 command_runner.go:130] >       "size":  "21136588",
	I1213 14:49:35.028710 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028714 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028717 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028721 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028725 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028731 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028734 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028741 1297065 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 14:49:35.028745 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028750 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 14:49:35.028753 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028757 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028764 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 14:49:35.028768 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028772 1297065 command_runner.go:130] >       "size":  "24678359",
	I1213 14:49:35.028775 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028783 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028786 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028790 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028794 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028797 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028799 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028806 1297065 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 14:49:35.028809 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028815 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 14:49:35.028818 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028822 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028829 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 14:49:35.028833 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028837 1297065 command_runner.go:130] >       "size":  "20661043",
	I1213 14:49:35.028841 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028844 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028847 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028852 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028855 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028858 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028861 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028867 1297065 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 14:49:35.028877 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028883 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 14:49:35.028886 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028890 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028897 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 14:49:35.028900 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028905 1297065 command_runner.go:130] >       "size":  "22429671",
	I1213 14:49:35.028908 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028912 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028915 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028919 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028926 1297065 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 14:49:35.028929 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028934 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 14:49:35.028937 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028941 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028948 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 14:49:35.028951 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028955 1297065 command_runner.go:130] >       "size":  "15391364",
	I1213 14:49:35.028959 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028962 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028965 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028969 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028972 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028975 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028978 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028984 1297065 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 14:49:35.028987 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028992 1297065 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 14:49:35.028995 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028998 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.029005 1297065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 14:49:35.029009 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.029016 1297065 command_runner.go:130] >       "size":  "267939",
	I1213 14:49:35.029019 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.029023 1297065 command_runner.go:130] >         "value":  "65535"
	I1213 14:49:35.029030 1297065 command_runner.go:130] >       },
	I1213 14:49:35.029034 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.029037 1297065 command_runner.go:130] >       "pinned":  true
	I1213 14:49:35.029040 1297065 command_runner.go:130] >     }
	I1213 14:49:35.029043 1297065 command_runner.go:130] >   ]
	I1213 14:49:35.029046 1297065 command_runner.go:130] > }
	I1213 14:49:35.031562 1297065 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:49:35.031587 1297065 containerd.go:534] Images already preloaded, skipping extraction
	I1213 14:49:35.031647 1297065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:49:35.054892 1297065 command_runner.go:130] > {
	I1213 14:49:35.054913 1297065 command_runner.go:130] >   "images":  [
	I1213 14:49:35.054918 1297065 command_runner.go:130] >     {
	I1213 14:49:35.054928 1297065 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 14:49:35.054933 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.054939 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 14:49:35.054943 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.054947 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.054959 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 14:49:35.054966 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.054970 1297065 command_runner.go:130] >       "size":  "40636774",
	I1213 14:49:35.054977 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.054982 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.054993 1297065 command_runner.go:130] >     },
	I1213 14:49:35.054996 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055014 1297065 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 14:49:35.055021 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055030 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 14:49:35.055033 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055037 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055045 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 14:49:35.055049 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055053 1297065 command_runner.go:130] >       "size":  "8034419",
	I1213 14:49:35.055057 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055060 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055064 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055067 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055074 1297065 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 14:49:35.055081 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055086 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 14:49:35.055092 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055104 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055117 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 14:49:35.055121 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055125 1297065 command_runner.go:130] >       "size":  "21168808",
	I1213 14:49:35.055135 1297065 command_runner.go:130] >       "username":  "nonroot",
	I1213 14:49:35.055139 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055143 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055151 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055158 1297065 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 14:49:35.055162 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055169 1297065 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 14:49:35.055173 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055177 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055187 1297065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 14:49:35.055193 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055201 1297065 command_runner.go:130] >       "size":  "21136588",
	I1213 14:49:35.055205 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055210 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055217 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055221 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055225 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055231 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055234 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055241 1297065 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 14:49:35.055246 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055254 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 14:49:35.055257 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055261 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055272 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 14:49:35.055278 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055283 1297065 command_runner.go:130] >       "size":  "24678359",
	I1213 14:49:35.055286 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055294 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055300 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055304 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055329 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055335 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055339 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055346 1297065 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 14:49:35.055352 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055358 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 14:49:35.055371 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055375 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055383 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 14:49:35.055388 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055392 1297065 command_runner.go:130] >       "size":  "20661043",
	I1213 14:49:35.055399 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055403 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055410 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055415 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055422 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055425 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055428 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055435 1297065 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 14:49:35.055446 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055452 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 14:49:35.055455 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055460 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055469 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 14:49:35.055477 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055482 1297065 command_runner.go:130] >       "size":  "22429671",
	I1213 14:49:35.055486 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055494 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055497 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055500 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055511 1297065 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 14:49:35.055515 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055524 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 14:49:35.055529 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055533 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055541 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 14:49:35.055547 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055551 1297065 command_runner.go:130] >       "size":  "15391364",
	I1213 14:49:35.055554 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055559 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055564 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055568 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055574 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055578 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055581 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055587 1297065 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 14:49:35.055595 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055602 1297065 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 14:49:35.055608 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055612 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055620 1297065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 14:49:35.055626 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055630 1297065 command_runner.go:130] >       "size":  "267939",
	I1213 14:49:35.055633 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055637 1297065 command_runner.go:130] >         "value":  "65535"
	I1213 14:49:35.055651 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055655 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055659 1297065 command_runner.go:130] >       "pinned":  true
	I1213 14:49:35.055662 1297065 command_runner.go:130] >     }
	I1213 14:49:35.055666 1297065 command_runner.go:130] >   ]
	I1213 14:49:35.055669 1297065 command_runner.go:130] > }
	I1213 14:49:35.057995 1297065 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:49:35.058021 1297065 cache_images.go:86] Images are preloaded, skipping loading
	I1213 14:49:35.058031 1297065 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 14:49:35.058154 1297065 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-562018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
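The generated systemd drop-in above first clears ExecStart= and then sets the full kubelet command line for this node (binary path for v1.35.0-beta.0, hostname override, node IP). The following Go snippet is only a minimal sketch of assembling such a drop-in from those three parameters; it is illustrative and not minikube's actual template code.

package main

import "fmt"

// Hypothetical sketch: assemble a kubelet systemd drop-in from the node
// parameters visible in the log above. Not minikube's real template.
func kubeletDropIn(binDir, nodeName, nodeIP string) string {
	execStart := fmt.Sprintf(
		"%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
			" --config=/var/lib/kubelet/config.yaml --hostname-override=%s"+
			" --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
		binDir, nodeName, nodeIP)
	return "[Unit]\nWants=containerd.service\n\n" +
		"[Service]\nExecStart=\nExecStart=" + execStart + "\n\n[Install]\n"
}

func main() {
	fmt.Print(kubeletDropIn("/var/lib/minikube/binaries/v1.35.0-beta.0", "functional-562018", "192.168.49.2"))
}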
	I1213 14:49:35.058232 1297065 ssh_runner.go:195] Run: sudo crictl info
	I1213 14:49:35.082362 1297065 command_runner.go:130] > {
	I1213 14:49:35.082385 1297065 command_runner.go:130] >   "cniconfig": {
	I1213 14:49:35.082391 1297065 command_runner.go:130] >     "Networks": [
	I1213 14:49:35.082395 1297065 command_runner.go:130] >       {
	I1213 14:49:35.082401 1297065 command_runner.go:130] >         "Config": {
	I1213 14:49:35.082405 1297065 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 14:49:35.082411 1297065 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 14:49:35.082415 1297065 command_runner.go:130] >           "Plugins": [
	I1213 14:49:35.082419 1297065 command_runner.go:130] >             {
	I1213 14:49:35.082423 1297065 command_runner.go:130] >               "Network": {
	I1213 14:49:35.082427 1297065 command_runner.go:130] >                 "ipam": {},
	I1213 14:49:35.082432 1297065 command_runner.go:130] >                 "type": "loopback"
	I1213 14:49:35.082436 1297065 command_runner.go:130] >               },
	I1213 14:49:35.082446 1297065 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 14:49:35.082450 1297065 command_runner.go:130] >             }
	I1213 14:49:35.082457 1297065 command_runner.go:130] >           ],
	I1213 14:49:35.082467 1297065 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 14:49:35.082473 1297065 command_runner.go:130] >         },
	I1213 14:49:35.082488 1297065 command_runner.go:130] >         "IFName": "lo"
	I1213 14:49:35.082495 1297065 command_runner.go:130] >       }
	I1213 14:49:35.082498 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082503 1297065 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 14:49:35.082507 1297065 command_runner.go:130] >     "PluginDirs": [
	I1213 14:49:35.082511 1297065 command_runner.go:130] >       "/opt/cni/bin"
	I1213 14:49:35.082516 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082520 1297065 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 14:49:35.082527 1297065 command_runner.go:130] >     "Prefix": "eth"
	I1213 14:49:35.082530 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082533 1297065 command_runner.go:130] >   "config": {
	I1213 14:49:35.082537 1297065 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 14:49:35.082544 1297065 command_runner.go:130] >       "/etc/cdi",
	I1213 14:49:35.082549 1297065 command_runner.go:130] >       "/var/run/cdi"
	I1213 14:49:35.082552 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082559 1297065 command_runner.go:130] >     "cni": {
	I1213 14:49:35.082562 1297065 command_runner.go:130] >       "binDir": "",
	I1213 14:49:35.082566 1297065 command_runner.go:130] >       "binDirs": [
	I1213 14:49:35.082570 1297065 command_runner.go:130] >         "/opt/cni/bin"
	I1213 14:49:35.082573 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.082578 1297065 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 14:49:35.082581 1297065 command_runner.go:130] >       "confTemplate": "",
	I1213 14:49:35.082586 1297065 command_runner.go:130] >       "ipPref": "",
	I1213 14:49:35.082589 1297065 command_runner.go:130] >       "maxConfNum": 1,
	I1213 14:49:35.082593 1297065 command_runner.go:130] >       "setupSerially": false,
	I1213 14:49:35.082601 1297065 command_runner.go:130] >       "useInternalLoopback": false
	I1213 14:49:35.082604 1297065 command_runner.go:130] >     },
	I1213 14:49:35.082611 1297065 command_runner.go:130] >     "containerd": {
	I1213 14:49:35.082617 1297065 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 14:49:35.082622 1297065 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 14:49:35.082629 1297065 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 14:49:35.082634 1297065 command_runner.go:130] >       "runtimes": {
	I1213 14:49:35.082637 1297065 command_runner.go:130] >         "runc": {
	I1213 14:49:35.082648 1297065 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 14:49:35.082654 1297065 command_runner.go:130] >           "PodAnnotations": null,
	I1213 14:49:35.082659 1297065 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 14:49:35.082672 1297065 command_runner.go:130] >           "cgroupWritable": false,
	I1213 14:49:35.082676 1297065 command_runner.go:130] >           "cniConfDir": "",
	I1213 14:49:35.082680 1297065 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 14:49:35.082684 1297065 command_runner.go:130] >           "io_type": "",
	I1213 14:49:35.082688 1297065 command_runner.go:130] >           "options": {
	I1213 14:49:35.082693 1297065 command_runner.go:130] >             "BinaryName": "",
	I1213 14:49:35.082699 1297065 command_runner.go:130] >             "CriuImagePath": "",
	I1213 14:49:35.082703 1297065 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 14:49:35.082707 1297065 command_runner.go:130] >             "IoGid": 0,
	I1213 14:49:35.082714 1297065 command_runner.go:130] >             "IoUid": 0,
	I1213 14:49:35.082719 1297065 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 14:49:35.082725 1297065 command_runner.go:130] >             "Root": "",
	I1213 14:49:35.082729 1297065 command_runner.go:130] >             "ShimCgroup": "",
	I1213 14:49:35.082743 1297065 command_runner.go:130] >             "SystemdCgroup": false
	I1213 14:49:35.082746 1297065 command_runner.go:130] >           },
	I1213 14:49:35.082751 1297065 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 14:49:35.082758 1297065 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 14:49:35.082765 1297065 command_runner.go:130] >           "runtimePath": "",
	I1213 14:49:35.082769 1297065 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 14:49:35.082774 1297065 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 14:49:35.082778 1297065 command_runner.go:130] >           "snapshotter": ""
	I1213 14:49:35.082784 1297065 command_runner.go:130] >         }
	I1213 14:49:35.082787 1297065 command_runner.go:130] >       }
	I1213 14:49:35.082790 1297065 command_runner.go:130] >     },
	I1213 14:49:35.082801 1297065 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 14:49:35.082809 1297065 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 14:49:35.082816 1297065 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 14:49:35.082820 1297065 command_runner.go:130] >     "disableApparmor": false,
	I1213 14:49:35.082825 1297065 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 14:49:35.082832 1297065 command_runner.go:130] >     "disableProcMount": false,
	I1213 14:49:35.082839 1297065 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 14:49:35.082845 1297065 command_runner.go:130] >     "enableCDI": true,
	I1213 14:49:35.082850 1297065 command_runner.go:130] >     "enableSelinux": false,
	I1213 14:49:35.082857 1297065 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 14:49:35.082862 1297065 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 14:49:35.082866 1297065 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 14:49:35.082871 1297065 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 14:49:35.082875 1297065 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 14:49:35.082880 1297065 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 14:49:35.082887 1297065 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 14:49:35.082893 1297065 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 14:49:35.082904 1297065 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 14:49:35.082910 1297065 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 14:49:35.082915 1297065 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 14:49:35.082926 1297065 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 14:49:35.082932 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082936 1297065 command_runner.go:130] >   "features": {
	I1213 14:49:35.082943 1297065 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 14:49:35.082946 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082950 1297065 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 14:49:35.082959 1297065 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 14:49:35.082976 1297065 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 14:49:35.082980 1297065 command_runner.go:130] >   "runtimeHandlers": [
	I1213 14:49:35.082984 1297065 command_runner.go:130] >     {
	I1213 14:49:35.082988 1297065 command_runner.go:130] >       "features": {
	I1213 14:49:35.083000 1297065 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 14:49:35.083004 1297065 command_runner.go:130] >         "user_namespaces": true
	I1213 14:49:35.083008 1297065 command_runner.go:130] >       }
	I1213 14:49:35.083012 1297065 command_runner.go:130] >     },
	I1213 14:49:35.083017 1297065 command_runner.go:130] >     {
	I1213 14:49:35.083021 1297065 command_runner.go:130] >       "features": {
	I1213 14:49:35.083026 1297065 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 14:49:35.083033 1297065 command_runner.go:130] >         "user_namespaces": true
	I1213 14:49:35.083041 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083055 1297065 command_runner.go:130] >       "name": "runc"
	I1213 14:49:35.083058 1297065 command_runner.go:130] >     }
	I1213 14:49:35.083061 1297065 command_runner.go:130] >   ],
	I1213 14:49:35.083064 1297065 command_runner.go:130] >   "status": {
	I1213 14:49:35.083068 1297065 command_runner.go:130] >     "conditions": [
	I1213 14:49:35.083077 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083081 1297065 command_runner.go:130] >         "message": "",
	I1213 14:49:35.083085 1297065 command_runner.go:130] >         "reason": "",
	I1213 14:49:35.083089 1297065 command_runner.go:130] >         "status": true,
	I1213 14:49:35.083098 1297065 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 14:49:35.083104 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083107 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083113 1297065 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 14:49:35.083118 1297065 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 14:49:35.083122 1297065 command_runner.go:130] >         "status": false,
	I1213 14:49:35.083128 1297065 command_runner.go:130] >         "type": "NetworkReady"
	I1213 14:49:35.083132 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083135 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083160 1297065 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 14:49:35.083171 1297065 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 14:49:35.083176 1297065 command_runner.go:130] >         "status": false,
	I1213 14:49:35.083182 1297065 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 14:49:35.083186 1297065 command_runner.go:130] >       }
	I1213 14:49:35.083190 1297065 command_runner.go:130] >     ]
	I1213 14:49:35.083196 1297065 command_runner.go:130] >   }
	I1213 14:49:35.083199 1297065 command_runner.go:130] > }
	I1213 14:49:35.086343 1297065 cni.go:84] Creating CNI manager for ""
	I1213 14:49:35.086370 1297065 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
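In the crictl info output above, RuntimeReady is true but NetworkReady is false with reason NetworkPluginNotReady, because /etc/cni/net.d has no network config yet; together with the docker driver and containerd runtime this is why kindnet is recommended. As an illustration, a small Go sketch (assuming crictl is installed and runnable via sudo, as in the log) that extracts those runtime conditions from the same JSON:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Sketch: decode the "status.conditions" section of crictl info output,
// the same fields shown in the log (RuntimeReady, NetworkReady, ...).
// Assumes crictl is installed and accessible via sudo.
type crictlInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		log.Fatal(err)
	}
	var info crictlInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatal(err)
	}
	for _, c := range info.Status.Conditions {
		fmt.Printf("%-40s status=%v reason=%s\n", c.Type, c.Status, c.Reason)
	}
}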
	I1213 14:49:35.086397 1297065 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:49:35.086420 1297065 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-562018 NodeName:functional-562018 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:49:35.086540 1297065 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-562018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
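The kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new below. A quick way to sanity-check such a file is to list each document's apiVersion and kind; this is a hypothetical sketch using gopkg.in/yaml.v3 with the path taken from the log:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Sketch: print the apiVersion and kind of each document in a
// multi-document kubeadm config. Path taken from the log above;
// adjust for your environment.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}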
	I1213 14:49:35.086621 1297065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 14:49:35.094718 1297065 command_runner.go:130] > kubeadm
	I1213 14:49:35.094739 1297065 command_runner.go:130] > kubectl
	I1213 14:49:35.094743 1297065 command_runner.go:130] > kubelet
	I1213 14:49:35.094761 1297065 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:49:35.094814 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:49:35.102589 1297065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 14:49:35.115905 1297065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 14:49:35.129462 1297065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 14:49:35.142335 1297065 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 14:49:35.146161 1297065 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 14:49:35.146280 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:35.271079 1297065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:49:35.585791 1297065 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018 for IP: 192.168.49.2
	I1213 14:49:35.585864 1297065 certs.go:195] generating shared ca certs ...
	I1213 14:49:35.585895 1297065 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:35.586063 1297065 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 14:49:35.586138 1297065 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 14:49:35.586175 1297065 certs.go:257] generating profile certs ...
	I1213 14:49:35.586327 1297065 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key
	I1213 14:49:35.586437 1297065 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee
	I1213 14:49:35.586523 1297065 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key
	I1213 14:49:35.586557 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 14:49:35.586602 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 14:49:35.586632 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 14:49:35.586672 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 14:49:35.586707 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 14:49:35.586737 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 14:49:35.586777 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 14:49:35.586811 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 14:49:35.586902 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 14:49:35.586962 1297065 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 14:49:35.586986 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:49:35.587046 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 14:49:35.587098 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:49:35.587157 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 14:49:35.587232 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:49:35.587302 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.587371 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem -> /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.587399 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.588006 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:49:35.609077 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:49:35.630697 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:49:35.652426 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 14:49:35.670342 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 14:49:35.687837 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 14:49:35.705877 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:49:35.723466 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:49:35.740679 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:49:35.758304 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 14:49:35.776736 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 14:49:35.794339 1297065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:49:35.806740 1297065 ssh_runner.go:195] Run: openssl version
	I1213 14:49:35.812461 1297065 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 14:49:35.812883 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.820227 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:49:35.827978 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831610 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831636 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831688 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.871766 1297065 command_runner.go:130] > b5213941
	I1213 14:49:35.872189 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:49:35.879531 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.886529 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 14:49:35.894015 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897550 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897859 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897930 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.938203 1297065 command_runner.go:130] > 51391683
	I1213 14:49:35.938708 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 14:49:35.946069 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.953176 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 14:49:35.960486 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964477 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964589 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964665 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 14:49:36.007360 1297065 command_runner.go:130] > 3ec20f2e
	I1213 14:49:36.007602 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 14:49:36.019390 1297065 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:49:36.024551 1297065 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:49:36.024587 1297065 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 14:49:36.024604 1297065 command_runner.go:130] > Device: 259,1	Inode: 2346070     Links: 1
	I1213 14:49:36.024612 1297065 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 14:49:36.024618 1297065 command_runner.go:130] > Access: 2025-12-13 14:45:28.579602026 +0000
	I1213 14:49:36.024623 1297065 command_runner.go:130] > Modify: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024628 1297065 command_runner.go:130] > Change: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024634 1297065 command_runner.go:130] >  Birth: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024743 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:49:36.067430 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.067964 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:49:36.109753 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.110299 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:49:36.151650 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.152123 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:49:36.199598 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.200366 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:49:36.241923 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.242478 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 14:49:36.282927 1297065 command_runner.go:130] > Certificate will not expire
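Each openssl x509 -checkend 86400 call above only confirms that the certificate remains valid for at least another 24 hours (86400 seconds). A rough Go equivalent using crypto/x509 is sketched below; the path is one of the certificates checked in the log and must exist on the machine where this runs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// Sketch of what "openssl x509 -noout -in <cert> -checkend 86400" verifies:
// the certificate's NotAfter must be at least 24h in the future.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) > 24*time.Hour {
		fmt.Println("Certificate will not expire")
	} else {
		fmt.Println("Certificate will expire")
	}
}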
	I1213 14:49:36.283387 1297065 kubeadm.go:401] StartCluster: {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:36.283480 1297065 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 14:49:36.283586 1297065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:49:36.308975 1297065 cri.go:89] found id: ""
	I1213 14:49:36.309092 1297065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:49:36.316103 1297065 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 14:49:36.316129 1297065 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 14:49:36.316138 1297065 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 14:49:36.317085 1297065 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 14:49:36.317145 1297065 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 14:49:36.317231 1297065 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 14:49:36.324724 1297065 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:49:36.325158 1297065 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-562018" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.325271 1297065 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "functional-562018" cluster setting kubeconfig missing "functional-562018" context setting]
	I1213 14:49:36.325603 1297065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.326011 1297065 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.326154 1297065 kapi.go:59] client config for functional-562018: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key", CAFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:49:36.326701 1297065 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 14:49:36.326719 1297065 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 14:49:36.326724 1297065 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 14:49:36.326733 1297065 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 14:49:36.326744 1297065 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
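At this point the kubeconfig has been repaired and a client-go rest.Config points at https://192.168.49.2:8441 using the profile's client certificate and key. The sketch below builds an equivalent client directly from that kubeconfig and lists nodes; the path comes from the log, and this is not minikube's own kapi helper:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Sketch: build a clientset from the repaired kubeconfig (path from the log;
// adjust for your environment) and list nodes to confirm the apiserver is
// reachable.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22122-1251074/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}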
	I1213 14:49:36.327001 1297065 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 14:49:36.327093 1297065 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 14:49:36.334496 1297065 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 14:49:36.334531 1297065 kubeadm.go:602] duration metric: took 17.366177ms to restartPrimaryControlPlane
	I1213 14:49:36.334540 1297065 kubeadm.go:403] duration metric: took 51.160034ms to StartCluster
	I1213 14:49:36.334555 1297065 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.334613 1297065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.335214 1297065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.335450 1297065 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 14:49:36.335789 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:36.335866 1297065 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 14:49:36.335932 1297065 addons.go:70] Setting storage-provisioner=true in profile "functional-562018"
	I1213 14:49:36.335945 1297065 addons.go:239] Setting addon storage-provisioner=true in "functional-562018"
	I1213 14:49:36.335975 1297065 host.go:66] Checking if "functional-562018" exists ...
	I1213 14:49:36.336461 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.336835 1297065 addons.go:70] Setting default-storageclass=true in profile "functional-562018"
	I1213 14:49:36.336857 1297065 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-562018"
	I1213 14:49:36.337151 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.340699 1297065 out.go:179] * Verifying Kubernetes components...
	I1213 14:49:36.343477 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:36.374082 1297065 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 14:49:36.376797 1297065 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.376892 1297065 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:36.376917 1297065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 14:49:36.376979 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:36.377245 1297065 kapi.go:59] client config for functional-562018: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key", CAFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
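(Editor's note: the kapi.go:59 line above dumps the certificate-based client config minikube builds for the profile. The minimal Go sketch below shows the same shape of a client-go rest.Config; the file paths are placeholders, not the paths from this run, and this is an illustration rather than minikube's own code.)

```go
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

// buildClientConfig is an illustrative sketch of a certificate-based
// rest.Config like the one dumped at kapi.go:59. The paths are placeholders.
func buildClientConfig() *rest.Config {
	return &rest.Config{
		Host: "https://192.168.49.2:8441",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profiles/functional-562018/client.crt",
			KeyFile:  "/path/to/profiles/functional-562018/client.key",
			CAFile:   "/path/to/ca.crt",
		},
	}
}

func main() {
	cfg := buildClientConfig()
	fmt.Println("API server:", cfg.Host)
}
```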
	I1213 14:49:36.377532 1297065 addons.go:239] Setting addon default-storageclass=true in "functional-562018"
	I1213 14:49:36.377566 1297065 host.go:66] Checking if "functional-562018" exists ...
	I1213 14:49:36.377992 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.415567 1297065 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:36.415590 1297065 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 14:49:36.415656 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:36.416969 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:36.442534 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:36.534721 1297065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:49:36.592567 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:36.600370 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:37.335898 1297065 node_ready.go:35] waiting up to 6m0s for node "functional-562018" to be "Ready" ...
	I1213 14:49:37.335934 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.336074 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336106 1297065 retry.go:31] will retry after 199.574589ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336165 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.336178 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336184 1297065 retry.go:31] will retry after 285.216803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
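(Editor's note: the repeated "apply failed, will retry ... will retry after NNNms" entries above come from minikube's addon apply loop re-running kubectl while the API server on port 8441 refuses connections. The sketch below, using only the Go standard library, shows the general retry-with-growing-delay pattern these log lines reflect; it is not minikube's actual retry.go, which also adds jitter.)

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff runs fn up to `attempts` times, sleeping for an
// increasing delay after each failure, mirroring the "will retry after ..."
// messages in the log above. Illustration only.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // grow the wait between attempts (the real loop adds jitter)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	_ = retryWithBackoff(3, 200*time.Millisecond, func() error {
		// Stand-in for the failing "kubectl apply -f .../storage-provisioner.yaml".
		return errors.New("dial tcp [::1]:8441: connect: connection refused")
	})
}
```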
	I1213 14:49:37.336272 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:37.336359 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:37.336679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:37.536000 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:37.591050 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.594766 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.594797 1297065 retry.go:31] will retry after 489.410948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.621926 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:37.677113 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.681307 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.681342 1297065 retry.go:31] will retry after 401.770697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.836587 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:37.836683 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:37.837004 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.083592 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:38.085139 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:38.190416 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.194296 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.194326 1297065 retry.go:31] will retry after 757.686696ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.207705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.207792 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.207830 1297065 retry.go:31] will retry after 505.194475ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.337091 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:38.337186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:38.337548 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.714015 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:38.783498 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.783559 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.783593 1297065 retry.go:31] will retry after 988.219406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.836722 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:38.836873 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:38.837238 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.952600 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:39.020705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:39.020749 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.020768 1297065 retry.go:31] will retry after 1.072702638s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.337164 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:39.337235 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:39.337545 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:39.337593 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
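(Editor's note: the node_ready.go entries above poll GET /api/v1/nodes/functional-562018 roughly every 500ms, waiting up to 6m0s for the node's Ready condition while the connection keeps being refused. The Go sketch below shows that polling pattern with client-go; the kubeconfig path, node name, and timeout are assumptions for illustration, not minikube's exact implementation.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the named Node until its Ready condition is True or
// the timeout expires, matching the cadence of the requests logged above.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the interval seen in the log
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	// Assumes a reachable cluster via the default kubeconfig; placeholder setup.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = waitForNodeReady(context.Background(), cs, "functional-562018", 6*time.Minute)
}
```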
	I1213 14:49:39.772102 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:39.836685 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:39.836850 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:39.837201 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:39.843566 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:39.843633 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.843675 1297065 retry.go:31] will retry after 1.296209829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.093780 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:40.156222 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:40.156329 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.156372 1297065 retry.go:31] will retry after 965.768616ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.336552 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:40.336651 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:40.336970 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:40.836822 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:40.836895 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:40.837217 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:41.122779 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:41.140323 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:41.215097 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:41.215182 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.215214 1297065 retry.go:31] will retry after 2.369565148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.219568 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:41.219636 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.219656 1297065 retry.go:31] will retry after 2.455142313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.336947 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:41.337019 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:41.337416 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:41.837047 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:41.837124 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:41.837388 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:41.837438 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:42.337111 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:42.337201 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:42.337621 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:42.836363 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:42.836444 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:42.836803 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:43.336226 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:43.336304 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:43.336552 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:43.585084 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:43.645189 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:43.649081 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.649137 1297065 retry.go:31] will retry after 3.995275361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.675423 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:43.738811 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:43.738856 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.738876 1297065 retry.go:31] will retry after 3.319355388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.837038 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:43.837127 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:43.837467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:43.837521 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:44.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:44.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:44.336669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:44.836348 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:44.836422 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:44.836715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:45.336277 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:45.336359 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:45.336715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:45.836385 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:45.836482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:45.836839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:46.336842 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:46.336917 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:46.337174 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:46.337224 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:46.836567 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:46.836641 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:46.837050 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:47.058405 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:47.140540 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:47.144585 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.144615 1297065 retry.go:31] will retry after 3.814662677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:47.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:47.336673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:47.645178 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:47.704569 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:47.708191 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.708226 1297065 retry.go:31] will retry after 4.571128182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.836452 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:47.836522 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:47.836787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:48.336260 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:48.336350 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:48.336628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:48.836239 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:48.836318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:48.836676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:48.836743 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:49.336220 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:49.336290 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:49.336531 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:49.836286 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:49.836362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:49.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.336380 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:50.336455 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:50.336799 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.836292 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:50.836366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:50.836651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.960127 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:51.026705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:51.026749 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:51.026767 1297065 retry.go:31] will retry after 9.152833031s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:51.336157 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:51.336252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:51.336592 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:51.336645 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:51.836328 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:51.836439 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:51.836752 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:52.280634 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:52.336265 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:52.336348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:52.336649 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:52.351151 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:52.351191 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:52.351210 1297065 retry.go:31] will retry after 6.806315756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:52.837084 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:52.837176 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:52.837503 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:53.336231 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:53.336347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:53.336679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:53.336735 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:53.836278 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:53.836358 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:53.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:54.336375 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:54.336453 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:54.336787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:54.836534 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:54.836609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:54.836960 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:55.336530 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:55.336608 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:55.336965 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:55.337034 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:55.836817 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:55.836889 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:55.837215 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:56.337019 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:56.337095 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:56.337433 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:56.836157 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:56.836242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:56.836511 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:57.336450 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:57.336529 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:57.336899 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:57.836240 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:57.836331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:57.836629 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:57.836681 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:58.336184 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:58.336276 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:58.336593 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:58.836286 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:58.836386 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:58.836728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:59.158224 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:59.216557 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:59.216609 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:59.216627 1297065 retry.go:31] will retry after 13.782587086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
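The storageclass apply above fails because nothing is listening on the apiserver port yet, and retry.go schedules another attempt after a varying delay. As a rough illustration of that apply-then-backoff pattern (not minikube's actual implementation; the applyAddon helper, the attempt count, and the backoff range are assumptions), a minimal Go sketch might look like:

```go
// Minimal sketch of an apply-with-retry loop, loosely mirroring the
// "apply failed, will retry after ..." messages in the log above.
// applyAddon, the attempt count, and the backoff range are illustrative
// assumptions, not minikube's code.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyAddon shells out to kubectl the same way the ssh_runner lines in the log do.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
	}
	return nil
}

func applyWithRetry(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = applyAddon(manifest); err == nil {
			return nil
		}
		// Jittered backoff, standing in for the varying retry intervals
		// (13.78s, 12.36s, ...) recorded above.
		wait := time.Duration(5+rand.Intn(25)) * time.Second
		fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
```

The apply can only succeed once the apiserver comes back, which is why every attempt in this window fails the same way.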
	I1213 14:49:59.336899 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:59.336976 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:59.337309 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:59.837050 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:59.837117 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:59.837393 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:59.837436 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:00.179978 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:00.336210 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:00.336301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:00.337482 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 14:50:00.358964 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:00.359008 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:00.359030 1297065 retry.go:31] will retry after 12.357990487s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:00.836789 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:00.836882 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:00.837203 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:01.336532 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:01.336609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:01.336921 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:01.836255 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:01.836341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:01.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:02.336512 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:02.336592 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:02.336956 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:02.337013 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:02.836530 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:02.836611 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:02.836888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:03.336279 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:03.336358 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:03.336701 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:03.836401 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:03.836474 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:03.836845 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:04.336380 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:04.336454 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:04.336784 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:04.836248 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:04.836328 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:04.836660 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:04.836716 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:05.336407 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:05.336490 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:05.336806 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:05.836467 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:05.836548 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:05.836865 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:06.336870 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:06.336953 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:06.337350 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:06.837024 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:06.837097 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:06.837419 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:06.837478 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:07.336416 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:07.336488 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:07.336747 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:07.836490 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:07.836567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:07.836906 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:08.336625 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:08.336699 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:08.337020 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:08.836518 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:08.836588 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:08.836865 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:09.336612 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:09.336692 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:09.337049 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:09.337109 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:09.836858 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:09.836939 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:09.837272 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:10.337051 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:10.337125 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:10.337387 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:10.837153 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:10.837234 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:10.837582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:11.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:11.336261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:11.336582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:11.836188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:11.836256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:11.836517 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:11.836567 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:12.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:12.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:12.336916 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:12.717305 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:12.775348 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:12.775393 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:12.775414 1297065 retry.go:31] will retry after 16.474515121s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:12.836567 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:12.836650 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:12.837019 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:13.000372 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:13.059399 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:13.063613 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:13.063652 1297065 retry.go:31] will retry after 8.071550656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:13.336122 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:13.336199 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:13.336467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:13.836136 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:13.836218 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:13.836591 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:13.836660 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
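Each ~500 ms cycle above is a GET on /api/v1/nodes/functional-562018 to read the node's Ready condition, and it keeps failing while the apiserver is down. A hedged client-go sketch of such a readiness poll is below; the kubeconfig path, node name, and poll interval are taken from the log for illustration, and the code itself is an assumption rather than the code that produced these lines.

```go
// Hedged illustration of polling a node's Ready condition with client-go.
// The clientset construction, node name, and interval are assumptions for
// the example; this is not minikube's node_ready implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		// While the apiserver is down this returns the same
		// "connect: connection refused" seen in the warnings above.
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := nodeReady(context.TODO(), cs, "functional-562018")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500 ms between polls
	}
}
```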
	I1213 14:50:14.336363 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:14.336438 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:14.336774 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:14.836188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:14.836261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:14.836540 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:15.336277 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:15.336353 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:15.336677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:15.836219 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:15.836293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:15.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:16.336546 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:16.336617 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:16.336864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:16.336904 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:16.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:16.836348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:16.836648 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:17.336586 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:17.336661 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:17.337008 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:17.836520 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:17.836585 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:17.836858 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:18.336257 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:18.336354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:18.336715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:18.836428 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:18.836502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:18.836842 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:18.836899 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:19.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:19.336311 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:19.336563 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:19.836231 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:19.836306 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:19.836619 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:20.336334 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:20.336416 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:20.336797 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:20.836189 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:20.836268 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:20.836618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:21.136217 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:21.193283 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:21.196963 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:21.196996 1297065 retry.go:31] will retry after 15.530830741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:21.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:21.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:21.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:21.336677 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:21.836352 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:21.836433 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:21.836751 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:22.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:22.336615 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:22.336948 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:22.836275 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:22.836348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:22.836696 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:23.336400 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:23.336482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:23.336828 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:23.336887 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:23.836198 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:23.836296 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:23.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:24.336254 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:24.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:24.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:24.836327 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:24.836403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:24.836743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:25.336278 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:25.336354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:25.336703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:25.836226 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:25.836309 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:25.836679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:25.836740 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:26.337200 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:26.337293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:26.337628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:26.836405 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:26.836480 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:26.836777 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:27.336562 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:27.336653 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:27.337005 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:27.836230 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:27.836307 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:27.836657 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:28.336177 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:28.336267 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:28.336587 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:28.336638 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:28.836250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:28.836332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:28.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:29.250199 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:29.308318 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:29.311716 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:29.311747 1297065 retry.go:31] will retry after 30.463725654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
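Both the addon applies and the readiness polls reduce to the same symptom: "dial tcp ... connect: connection refused", meaning nothing is accepting connections on the apiserver port. A quick way to reproduce that check by hand is a plain TCP dial; the address below comes from the log, and the snippet is only an illustration, not part of the test:

```go
// Hedged sketch: probe the apiserver port the same way the failing dials do.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		// Matches the "connection refused" errors throughout this log.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```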
	I1213 14:50:29.336999 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:29.337080 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:29.337458 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:29.836155 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:29.836222 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:29.836520 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:30.336243 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:30.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:30.336620 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:30.336669 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:30.836228 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:30.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:30.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:31.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:31.336285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:31.336588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:31.836269 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:31.836340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:31.836687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:32.336490 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:32.336568 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:32.336902 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:32.336957 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:32.836192 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:32.836262 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:32.836598 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:33.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:33.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:33.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:33.836245 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:33.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:33.836664 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:34.336184 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:34.336253 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:34.336535 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:34.836284 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:34.836360 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:34.836791 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:34.836848 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:35.336516 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:35.336596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:35.336938 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:35.836527 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:35.836598 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:35.836864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:36.336942 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:36.337020 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:36.337342 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:36.728993 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:36.785078 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:36.788836 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:36.788868 1297065 retry.go:31] will retry after 31.693829046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:36.837069 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:36.837145 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:36.837461 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:36.837513 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:37.336193 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:37.336260 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:37.336549 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:37.836234 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:37.836312 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:37.836628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:38.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:38.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:38.336672 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:38.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:38.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:38.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:39.336229 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:39.336304 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:39.336650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:39.336709 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:39.836236 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:39.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:39.836618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:40.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:40.336355 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:40.336614 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:40.836272 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:40.836382 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:40.836789 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[identical node-readiness polls repeated every ~500 ms from 14:50:41.336 through 14:50:59.336; every GET https://192.168.49.2:8441/api/v1/nodes/functional-562018 returned an empty response, and node_ready.go:55 logged "error getting node \"functional-562018\" condition \"Ready\" status (will retry): ... dial tcp 192.168.49.2:8441: connect: connection refused" eight times in this interval]
	I1213 14:50:59.776318 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:59.836157 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:59.836232 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:59.836466 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:59.836509 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:59.839555 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:59.839592 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:59.839611 1297065 retry.go:31] will retry after 31.022889465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
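	The apply above keeps failing only because nothing is answering on the apiserver port yet, not because the manifest is invalid. As an illustrative check (not part of the test output; it assumes shell access to the profile via minikube ssh and reuses the exact kubeconfig and kubectl paths shown in the log), one could confirm that directly:

	    # does anything answer on the address the poll and the apply both use?
	    curl -k https://192.168.49.2:8441/healthz
	    # same check from inside the node, with the binaries the addon code invokes
	    minikube ssh -p functional-562018 -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get --raw /readyz

	Both calls should fail with the same "connection refused" for as long as the apiserver is down, which is also why the --validate=false hint in the stderr would not help here: apply still needs a reachable server to submit the objects.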
	[polling continued every ~500 ms from 14:51:00.336 through 14:51:08.336 with the same empty responses; the node_ready.go connection-refused warning recurred at 14:51:01.8, 14:51:03.8 and 14:51:06.3]
	I1213 14:51:08.482933 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:51:08.546772 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:08.546820 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:08.546914 1297065 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
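	The storageclass addon hits the same refused connection. Because the container runtime here is containerd, a quick way to see whether the kube-apiserver container is even running on the node is to inspect it through crictl; a minimal sketch, assuming the crictl and ss tools present in the standard minikube node image:

	    # list apiserver containers, including exited ones
	    minikube ssh -p functional-562018 -- sudo crictl ps -a | grep kube-apiserver
	    # is anything listening on the port the client keeps dialing?
	    minikube ssh -p functional-562018 -- sudo ss -ltnp | grep 8441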
	[polling continued every ~500 ms from 14:51:08.836 through 14:51:30.836, every request to https://192.168.49.2:8441/api/v1/nodes/functional-562018 still getting an empty response; the node_ready.go "connection refused" warning recurred eleven times in this interval, last at 14:51:30.836]
	I1213 14:51:30.863046 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:51:30.922204 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:30.922247 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:30.922363 1297065 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 14:51:30.925463 1297065 out.go:179] * Enabled addons: 
	I1213 14:51:30.929007 1297065 addons.go:530] duration metric: took 1m54.593151344s for enable addons: enabled=[]
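	Because both applies failed, the addon phase finishes with an empty enabled list after almost two minutes. If the apiserver later becomes reachable, the outcome can be re-checked against the objects those two manifests normally create; a sketch, assuming the default kubectl context minikube writes for this profile and the usual minikube object names (storage-provisioner pod, "standard" StorageClass):

	    kubectl --context functional-562018 -n kube-system get pod storage-provisioner
	    kubectl --context functional-562018 get storageclass standard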
	I1213 14:51:31.336478 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:31.336574 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:31.336911 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:31.836663 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:31.836742 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:31.837400 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:32.336285 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:32.336362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:32.337832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 14:51:32.836218 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:32.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:32.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:33.336237 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:33.336311 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:33.336634 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:33.336688 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
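	From here on the log is a single retry loop: node_ready.go issues a GET for node functional-562018 roughly every 500ms and keeps retrying while the connection is refused. A rough client-go approximation of such a Ready-condition poll is sketched below; it assumes a kubeconfig at /var/lib/minikube/kubeconfig and is not the actual minikube implementation.

// node_ready_sketch.go - approximation of a Ready-condition poll; not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object every 500ms (the cadence visible in the
// log) and returns once its Ready condition is True. Transient errors such as
// "connection refused" are swallowed so the poll keeps retrying.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // retry on connection refused and similar errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "functional-562018", 4*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

	Against an apiserver that never comes up, a loop like this runs until its timeout, which matches the wall of repeated requests that follows.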
	I1213 14:51:33.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:33.836296 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:33.836630 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:34.336182 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:34.336256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:34.336569 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:34.836237 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:34.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:34.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:35.336246 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:35.336327 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:35.336677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:35.336739 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
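	Each "Request"/"Response" pair above is produced by the client's debug transport (round_trippers.go), which wraps the HTTP round tripper and records the verb, URL, headers, and latency of every call; an empty status with 0 milliseconds is what a transport-level "connection refused" looks like through that wrapper. A simplified, self-contained version of the wrapping pattern is sketched below, as an illustration of the technique rather than the client-go implementation.

// logging_roundtripper.go - simplified request/response logging transport.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// loggingRoundTripper wraps another RoundTripper and prints one line per
// request and one per response, similar in spirit to the "Request"/"Response"
// lines in the log above.
type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("Request verb=%s url=%s\n", req.Method, req.URL)
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		// Connection refused and similar transport errors surface here, before
		// any HTTP status exists - hence the empty status fields in the log.
		fmt.Printf("Response status=\"\" milliseconds=%d err=%v\n", ms, err)
		return nil, err
	}
	fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	// Example call; against an unreachable apiserver this logs a transport error.
	resp, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-562018")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()
}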
	I1213 14:51:35.836381 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:35.836450 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:35.836754 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:36.336847 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:36.336928 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:36.337255 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:36.836533 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:36.836613 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:36.836939 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:37.336507 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:37.336573 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:37.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:37.336879 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:37.836228 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:37.836301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:37.836594 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:38.336263 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:38.336341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:38.336675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:38.836200 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:38.836285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:38.836811 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:39.336276 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:39.336373 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:39.336728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:39.836259 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:39.836349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:39.836684 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:39.836742 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:40.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:40.336295 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:40.336618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:40.836435 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:40.836524 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:40.836905 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:41.336775 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:41.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:41.337175 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:41.836559 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:41.836631 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:41.836894 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:41.836936 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:42.336658 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:42.336748 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:42.337128 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:42.836909 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:42.836987 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:42.837289 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:43.337040 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:43.337127 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:43.337474 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:43.836192 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:43.836275 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:43.836624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:44.336291 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:44.336388 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:44.336784 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:44.336841 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:44.836180 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:44.836254 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:44.836551 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:45.336321 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:45.336400 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:45.336690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:45.836435 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:45.836510 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:45.836833 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:46.336779 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:46.336848 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:46.337141 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:46.337201 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:46.836518 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:46.836596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:46.836935 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:47.336893 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:47.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:47.337308 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:47.836529 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:47.836614 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:47.836876 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:48.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:48.336337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:48.336692 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:48.836415 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:48.836494 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:48.836834 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:48.836892 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:49.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:49.336621 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:49.336897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:49.836323 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:49.836400 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:49.836703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:50.336284 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:50.336361 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:50.336695 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:50.836346 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:50.836424 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:50.836742 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:51.336225 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:51.336303 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:51.336650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:51.336709 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:51.836373 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:51.836452 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:51.836792 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:52.336469 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:52.336538 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:52.336793 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:52.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:52.836345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:52.836678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:53.336269 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:53.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:53.336675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:53.336740 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:53.836126 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:53.836205 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:53.836462 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:54.336204 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:54.336277 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:54.336594 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:54.836224 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:54.836301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:54.836659 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:55.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:55.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:55.336616 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:55.836312 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:55.836389 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:55.836728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:55.836782 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:56.336654 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:56.336732 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:56.337071 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:56.836525 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:56.836605 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:56.836887 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:57.336719 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:57.336796 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:57.337143 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:57.836841 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:57.836920 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:57.837247 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:57.837302 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:58.337040 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:58.337110 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:58.337364 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:58.837119 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:58.837198 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:58.837538 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:59.336279 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:59.336367 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:59.336734 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:59.836438 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:59.836511 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:59.836774 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:00.355395 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:00.355523 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:00.355852 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:00.355945 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:00.836731 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:00.836813 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:00.837145 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:01.336514 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:01.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:01.336898 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:01.836757 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:01.836832 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:01.837174 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:02.336946 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:02.337023 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:02.337363 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:02.836523 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:02.836599 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:02.836906 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:02.836965 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:03.336199 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:03.336271 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:03.336598 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:03.836313 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:03.836395 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:03.836725 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:04.336141 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:04.336218 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:04.336472 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:04.836200 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:04.836276 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:04.836590 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:05.336247 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:05.336324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:05.336654 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:05.336712 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:05.836185 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:05.836257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:05.836570 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:06.336596 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:06.336670 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:06.337028 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:06.836851 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:06.836932 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:06.837278 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:07.337039 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:07.337104 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:07.337364 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:07.337404 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:07.837188 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:07.837264 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:07.837630 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:08.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:08.336324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:08.336644 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:08.836195 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:08.836269 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:08.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:09.336284 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:09.336374 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:09.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:09.836403 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:09.836488 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:09.836831 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:09.836885 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:10.336187 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:10.336264 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:10.336588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:10.836238 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:10.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:10.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:11.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:11.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:11.336683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:11.836362 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:11.836437 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:11.836693 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:12.336616 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:12.336691 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:12.337039 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:12.337098 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:12.836854 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:12.836931 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:12.837269 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:13.337012 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:13.337077 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:13.337331 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:13.837136 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:13.837214 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:13.837562 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:14.336275 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:14.336353 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:14.336653 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:14.836184 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:14.836257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:14.836550 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:14.836598 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:15.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:15.336321 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:15.336652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:15.836388 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:15.836477 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:15.836811 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:16.336837 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:16.336907 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:16.337175 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:16.836969 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:16.837065 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:16.837433 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:16.837491 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:17.336248 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:17.336323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:17.336684 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:17.836232 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:17.836298 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:17.836601 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:18.336212 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:18.336289 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:18.336616 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:18.836403 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:18.836489 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:18.836838 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:19.336195 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:19.336269 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:19.336594 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:19.336650 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:19.836346 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:19.836429 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:19.836796 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:20.336507 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:20.336589 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:20.336899 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:20.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:20.836258 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:20.836517 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:21.336228 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:21.336302 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:21.336632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:21.336692 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:21.836262 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:21.836338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:21.836657 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:22.336526 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:22.336604 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:22.336882 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:22.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:22.836317 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:22.836632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:23.336264 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:23.336373 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:23.336709 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:23.336768 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:23.836406 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:23.836477 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:23.836791 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:24.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:24.336336 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:24.336674 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:24.836391 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:24.836474 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:24.836800 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:25.336188 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:25.336256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:25.336595 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:25.836360 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:25.836444 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:25.836782 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:25.836842 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:26.336659 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:26.336742 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:26.337133 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:26.836533 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:26.836602 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:26.836915 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:27.336718 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:27.336789 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:27.337149 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:27.836949 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:27.837024 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:27.837383 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:27.837440 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:28.337164 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:28.337233 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:28.337486 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:28.836203 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:28.836284 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:28.836624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:29.336359 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:29.336444 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:29.336786 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:29.836473 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:29.836542 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:29.836800 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:30.336266 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:30.336346 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:30.336712 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:30.336778 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:30.836448 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:30.836530 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:30.836895 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:31.336594 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:31.336667 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:31.336934 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:31.836251 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:31.836334 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:31.836670 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:32.336445 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:32.336545 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:32.336826 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:32.336874 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:32.836183 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:32.836254 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:32.836608 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:33.336221 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:33.336296 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:33.336654 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:33.836240 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:33.836319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:33.836658 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:34.336330 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:34.336399 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:34.336664 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:34.836346 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:34.836426 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:34.836772 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:34.836831 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:35.336328 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:35.336410 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:35.336774 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:35.836188 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:35.836261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:35.836582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:36.336650 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:36.336733 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:36.337068 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:36.836880 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:36.836955 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:36.837277 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:36.837337 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:37.336192 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:37.336266 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:37.336525 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:37.836226 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:37.836309 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:37.836638 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:38.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:38.336322 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:38.336672 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:38.836202 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:38.836273 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:38.836547 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:39.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:39.336358 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:39.336701 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:39.336754 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:39.836426 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:39.836508 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:39.836821 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:40.336191 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:40.336263 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:40.336564 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:40.836260 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:40.836361 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:40.836721 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:41.336424 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:41.336505 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:41.336831 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:41.336888 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:41.836209 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:41.836299 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:41.836584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:42.336696 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:42.336785 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:42.337191 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:42.836996 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:42.837071 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:42.837403 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:43.336118 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:43.336196 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:43.336449 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:43.836158 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:43.836243 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:43.836549 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:43.836602 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:44.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:44.336324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:44.336613 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:44.836191 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:44.836266 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:44.836521 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:45.336296 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:45.336373 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:45.336712 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:45.836269 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:45.836353 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:45.836712 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:45.836772 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:46.336576 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:46.336646 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:46.336952 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:46.836262 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:46.836354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:46.836668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:47.336578 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:47.336658 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:47.336990 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:47.836518 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:47.836598 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:47.836865 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:47.836918 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:48.336636 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:48.336714 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:48.337035 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:48.836837 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:48.836909 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:48.837235 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:49.336532 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:49.336609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:49.336874 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:49.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:49.836320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:49.836663 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:50.336257 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:50.336343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:50.336683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:50.336737 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:50.836244 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:50.836318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:50.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:51.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:51.336340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:51.336658 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:51.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:51.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:51.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:52.336454 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:52.336534 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:52.336820 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:52.336867 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:52.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:52.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:52.836674 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:53.336385 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:53.336470 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:53.336787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:53.836193 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:53.836269 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:53.836583 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:54.336266 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:54.336348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:54.336708 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:54.836271 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:54.836348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:54.836719 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:54.836775 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:55.336414 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:55.336481 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:55.336738 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:55.836424 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:55.836502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:55.836840 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:56.336926 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:56.337006 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:56.337393 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:56.837161 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:56.837240 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:56.837514 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:56.837556 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:57.336486 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:57.336562 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:57.336904 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:57.836247 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:57.836326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:57.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:58.336169 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:58.336261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:58.336585 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:58.836253 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:58.836338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:58.836686 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:59.336405 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:59.336488 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:59.336818 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:59.336881 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:59.836205 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:59.836279 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:59.836602 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:00.336348 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:00.336434 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:00.336755 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:00.836458 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:00.836538 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:00.836919 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:01.336481 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:01.336559 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:01.336870 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:01.336917 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:01.836269 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:01.836349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:01.836651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:02.336510 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:02.336585 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:02.336875 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:02.836559 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:02.836633 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:02.836887 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:03.336237 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:03.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:03.336652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:03.836262 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:03.836343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:03.836681 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:03.836743 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:04.336178 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:04.336263 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:04.336579 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:04.836312 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:04.836395 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:04.836768 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:05.336328 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:05.336405 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:05.336722 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:05.836169 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:05.836249 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:05.836532 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:06.337061 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:06.337133 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:06.337448 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:06.337510 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
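For context on what this poll is waiting for: once the apiserver starts accepting connections, the readiness check inspects the node object's Ready condition. The following is a hedged client-go sketch of that check, written in Go; the default kubeconfig path and the node name from the log are assumptions for illustration, and this is not the node_ready.go implementation itself.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default home path pointing at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name taken from the log above.
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-562018", metav1.GetOptions{})
	if err != nil {
		// While the apiserver is down this fails the same way as the log: connection refused.
		panic(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("node Ready=%s (reason: %s)\n", cond.Status, cond.Reason)
		}
	}
}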
	I1213 14:53:06.836170 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:06.836312 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:06.836648 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:07.336505 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:07.336579 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:07.336834 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:07.836162 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:07.836243 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:07.836604 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:08.336414 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:08.336492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:08.336833 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:08.836389 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:08.836459 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:08.836768 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:08.836825 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:09.336249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:09.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:09.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:09.836385 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:09.836463 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:09.836810 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:10.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:10.336589 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:10.336857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:10.836237 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:10.836310 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:10.836668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:11.336409 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:11.336502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:11.336898 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:11.336954 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:11.836193 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:11.836273 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:11.836553 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:12.336497 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:12.336582 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:12.336916 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:12.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:12.836346 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:12.836677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:13.336363 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:13.336435 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:13.336727 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:13.836260 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:13.836336 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:13.836645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:13.836693 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:14.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:14.336320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:14.336647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:14.836180 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:14.836273 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:14.836579 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:15.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:15.336342 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:15.336712 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:15.836446 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:15.836528 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:15.836857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:15.836911 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:16.336886 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:16.336953 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:16.337211 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:16.836970 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:16.837043 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:16.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:17.336898 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:17.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:17.337298 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:17.837031 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:17.837110 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:17.837392 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:17.837435 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:18.336966 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:18.337049 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:18.337376 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:18.837166 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:18.837253 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:18.837689 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:19.336193 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:19.336283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:19.336617 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:19.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:19.836332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:19.836666 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:20.336399 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:20.336476 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:20.336824 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:20.336877 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:20.836528 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:20.836607 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:20.836879 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:21.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:21.336319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:21.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:21.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:21.836331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:21.836682 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:22.336425 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:22.336492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:22.336751 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:22.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:22.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:22.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:22.836701 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:23.336413 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:23.336491 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:23.336832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:23.836195 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:23.836282 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:23.836590 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:24.336251 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:24.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:24.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:24.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:24.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:24.836645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:25.336331 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:25.336403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:25.336743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:25.336792 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:25.836245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:25.836324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:25.836644 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:26.336605 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:26.336680 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:26.337038 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:26.836509 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:26.836578 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:26.836824 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:27.336452 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:27.336525 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:27.336887 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:27.336942 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:27.836486 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:27.836568 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:27.836917 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:28.336112 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:28.336186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:28.336563 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:28.836282 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:28.836357 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:28.836713 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:29.336309 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:29.336384 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:29.336723 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:29.836403 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:29.836478 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:29.836733 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:29.836776 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:30.336220 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:30.336298 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:30.336637 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:30.836357 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:30.836431 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:30.836763 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:31.336439 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:31.336532 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:31.336820 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:31.836503 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:31.836582 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:31.836898 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:31.836954 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:32.336893 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:32.336969 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:32.337280 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:32.837017 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:32.837102 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:32.837392 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:33.336206 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:33.336289 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:33.336624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:33.836226 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:33.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:33.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:34.336143 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:34.336223 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:34.336515 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:34.336566 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:34.836223 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:34.836309 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:34.836660 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:35.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:35.336337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:35.336768 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:35.836351 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:35.836427 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:35.836683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:36.336777 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:36.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:36.337168 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:36.337222 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:36.837003 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:36.837084 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:36.837449 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:37.336375 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:37.336445 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:37.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:37.836402 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:37.836479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:37.836826 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:38.336440 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:38.336525 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:38.336860 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:38.836259 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:38.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:38.836606 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:38.836659 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:39.336506 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:39.336596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:39.337235 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:39.836335 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:39.836421 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:39.836794 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:40.336522 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:40.336587 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:40.336888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:40.836592 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:40.836674 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:40.837021 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:40.837076 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:41.336578 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:41.336655 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:41.336975 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:41.836525 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:41.836604 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:41.836959 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:42.336767 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:42.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:42.337172 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:42.836977 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:42.837055 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:42.837406 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:42.837463 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:43.336096 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:43.336165 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:43.336522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:43.836216 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:43.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:43.836677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:44.336275 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:44.336366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:44.336718 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:44.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:44.836246 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:44.836531 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:45.336294 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:45.336384 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:45.336759 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:45.336815 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:45.836495 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:45.836571 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:45.836902 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:46.336923 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:46.336991 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:46.337248 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:46.836581 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:46.836658 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:46.836955 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:47.336876 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:47.336959 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:47.337291 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:47.337349 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:47.837127 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:47.837195 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:47.837512 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:48.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:48.336347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:48.336704 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:48.836213 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:48.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:48.836647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:49.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:49.336258 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:49.336584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:49.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:49.836330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:49.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:49.836707 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:50.336396 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:50.336475 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:50.336802 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:50.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:50.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:50.836524 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:51.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:51.336340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:51.336661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:51.836254 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:51.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:51.836673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:51.836731 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:52.336439 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:52.336508 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:52.336813 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:52.836552 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:52.836646 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:52.837037 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:53.336867 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:53.336943 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:53.337248 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:53.836529 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:53.836600 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:53.836882 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:53.836925 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:54.336730 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:54.336804 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:54.337142 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:54.836954 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:54.837030 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:54.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:55.337104 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:55.337186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:55.337475 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:55.836190 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:55.836268 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:55.836616 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:56.336432 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:56.336515 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:56.336847 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:56.336900 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:56.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:56.836260 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:56.836553 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:57.336500 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:57.336575 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:57.336934 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:57.836737 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:57.836827 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:57.837184 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:58.336550 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:58.336646 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:58.336966 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:58.337018 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:58.836741 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:58.836828 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:58.837162 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:59.336945 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:59.337026 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:59.337378 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:59.836973 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:59.837043 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:59.837302 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:00.337185 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:00.337285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:00.337926 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:00.338025 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:00.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:00.836326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:00.836691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:01.336224 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:01.336316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:01.336589 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:01.836224 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:01.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:01.836636 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:02.336530 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:02.336607 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:02.336904 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:02.836600 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:02.836677 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:02.837015 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:02.837082 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:03.336835 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:03.336910 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:03.337276 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:03.837094 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:03.837170 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:03.837522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:04.336212 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:04.336283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:04.336559 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:04.836246 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:04.836354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:04.836699 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:05.336254 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:05.336341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:05.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:05.336745 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:05.836209 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:05.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:05.836622 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:06.336695 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:06.336783 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:06.337108 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:06.836892 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:06.836966 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:06.837308 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:07.336123 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:07.336192 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:07.336465 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:07.836757 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:07.836832 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:07.837160 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:07.837217 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:08.336959 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:08.337035 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:08.337354 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:08.837047 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:08.837117 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:08.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:09.336797 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:09.336876 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:09.337176 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:09.836976 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:09.837060 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:09.837357 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:09.837405 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:10.337145 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:10.337219 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:10.337522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:10.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:10.836324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:10.836624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:11.336249 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:11.336335 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:11.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:11.836251 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:11.836329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:11.836584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:12.336557 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:12.336629 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:12.336964 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:12.337021 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:12.836792 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:12.836867 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:12.837180 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:13.336499 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:13.336580 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:13.336912 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:13.836538 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:13.836617 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:13.836932 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:14.336207 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:14.336299 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:14.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:14.836329 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:14.836410 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:14.836729 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:14.836786 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:15.336238 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:15.336371 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:15.336658 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:15.836347 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:15.836425 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:15.836765 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:16.336570 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:16.336641 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:16.336896 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:16.836234 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:16.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:16.836636 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:17.336499 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:17.336578 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:17.336890 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:17.336950 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:17.836161 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:17.836245 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:17.836561 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:18.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:18.336345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:18.336673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:18.836422 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:18.836499 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:18.836856 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:19.336539 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:19.336609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:19.336871 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:19.836236 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:19.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:19.836657 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:19.836712 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:20.336398 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:20.336479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:20.336829 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:20.836244 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:20.836336 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:20.836642 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:21.336253 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:21.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:21.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:21.836309 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:21.836398 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:21.836758 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:21.836814 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:22.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:22.336624 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:22.336925 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:22.836625 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:22.836707 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:22.837057 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:23.336641 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:23.336724 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:23.337073 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:23.836556 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:23.836634 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:23.836903 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:23.836945 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:24.336275 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:24.336357 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:24.336645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:24.836238 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:24.836349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:24.836732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:25.336455 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:25.336529 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:25.336850 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:25.836272 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:25.836354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:25.836679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:26.336762 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:26.336843 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:26.337194 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:26.337248 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:26.836530 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:26.836634 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:26.836949 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:27.337082 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:27.337168 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:27.337523 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:27.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:27.836347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:27.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:28.336192 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:28.336265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:28.336562 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:28.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:28.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:28.836676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:28.836744 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:29.336414 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:29.336563 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:29.336947 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:29.836198 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:29.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:29.836614 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:30.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:30.336318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:30.336656 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:30.836210 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:30.836286 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:30.836612 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:31.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:31.336362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:31.336639 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:31.336684 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:31.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:31.836343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:31.836692 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:32.336488 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:32.336567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:32.336863 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:32.836173 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:32.836265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:32.836578 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:33.336248 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:33.336322 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:33.336687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:33.336748 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:33.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:33.836362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:33.836704 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:34.336406 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:34.336478 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:34.336748 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:34.836467 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:34.836551 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:34.836857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:35.336588 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:35.336668 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:35.337027 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:35.337086 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:35.836514 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:35.836585 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:35.836913 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:36.336967 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:36.337041 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:36.337376 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:36.837202 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:36.837285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:36.837584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:37.336502 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:37.336591 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:37.336896 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:37.836591 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:37.836694 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:37.837046 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:37.837115 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:38.336899 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:38.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:38.337328 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:38.837050 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:38.837126 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:38.837404 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:39.336160 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:39.336232 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:39.336580 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:39.836289 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:39.836382 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:39.836703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:40.336371 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:40.336443 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:40.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:40.336759 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:40.836239 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:40.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:40.836655 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:41.336240 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:41.336320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:41.336686 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:41.836231 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:41.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:41.836611 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:42.336623 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:42.336717 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:42.337080 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:42.337132 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:42.836776 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:42.836862 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:42.837178 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:43.336512 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:43.336586 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:43.336846 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:43.836233 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:43.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:43.836685 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:44.336266 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:44.336339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:44.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:44.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:44.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:44.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:44.836676 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:45.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:45.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:45.336676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:45.836517 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:45.836597 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:45.836920 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:46.336894 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:46.336967 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:46.337224 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:46.837014 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:46.837094 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:46.837437 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:46.837490 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:47.336253 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:47.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:47.336670 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:47.836235 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:47.836302 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:47.836610 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:48.336235 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:48.336318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:48.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:48.836257 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:48.836337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:48.836678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:49.336349 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:49.336431 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:49.336732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:49.336821 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:49.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:49.836293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:49.836634 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:50.336226 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:50.336307 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:50.336635 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:50.836333 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:50.836410 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:50.836688 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:51.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:51.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:51.336678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:51.836396 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:51.836479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:51.836771 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:51.836817 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:52.336516 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:52.336593 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:52.336852 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:52.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:52.836340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:52.836773 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:53.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:53.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:53.336935 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:53.836510 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:53.836587 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:53.836851 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:53.836896 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:54.336367 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:54.336467 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:54.336808 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:54.836267 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:54.836343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:54.836686 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:55.336171 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:55.336242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:55.336562 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:55.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:55.836320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:55.836689 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:56.336624 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:56.336725 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:56.337092 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:56.337153 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:56.836464 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:56.836539 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:56.836794 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:57.336513 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:57.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:57.336897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:57.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:57.836300 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:57.836664 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:58.336100 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:58.336175 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:58.336496 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:58.836220 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:58.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:58.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:58.836706 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:59.336458 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:59.336535 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:59.336905 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:59.836288 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:59.836366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:59.836722 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:00.336435 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:00.336516 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:00.336842 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:00.836803 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:00.836881 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:00.837232 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:00.837290 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:01.336546 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:01.336620 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:01.336919 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:01.836631 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:01.836717 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:01.837061 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:02.336921 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:02.337000 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:02.337379 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:02.837188 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:02.837257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:02.837522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:02.837565 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:03.336219 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:03.336301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:03.336654 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:03.836248 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:03.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:03.836635 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:04.336178 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:04.336251 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:04.336567 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:04.836232 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:04.836318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:04.836669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:05.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:05.336317 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:05.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:05.336713 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:05.836366 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:05.836448 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:05.836735 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:06.336637 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:06.336720 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:06.337074 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:06.836743 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:06.836817 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:06.837172 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:07.336998 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:07.337074 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:07.337343 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:07.337395 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:07.837167 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:07.837242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:07.837584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:08.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:08.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:08.336647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:08.836178 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:08.836256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:08.836598 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:09.336224 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:09.336297 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:09.336632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:09.836244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:09.836321 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:09.836675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:09.836731 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:10.336173 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:10.336248 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:10.336521 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:10.836203 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:10.836283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:10.836642 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:11.336345 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:11.336437 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:11.336802 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:11.836493 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:11.836567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:11.836846 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:11.836897 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:12.336745 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:12.336822 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:12.337164 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:12.836822 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:12.836903 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:12.837329 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:13.337068 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:13.337137 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:13.337477 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:13.836207 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:13.836286 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:13.836668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:14.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:14.336327 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:14.336632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:14.336679 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:14.836300 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:14.836375 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:14.836649 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:15.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:15.336332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:15.336669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:15.836251 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:15.836333 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:15.836683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:16.336651 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:16.336729 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:16.337093 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:16.337145 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:16.836909 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:16.836992 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:16.837356 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:17.336137 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:17.336212 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:17.336571 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:17.836247 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:17.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:17.836610 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:18.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:18.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:18.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:18.836230 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:18.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:18.836647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:18.836705 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:19.336364 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:19.336454 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:19.336797 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:19.836215 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:19.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:19.836625 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:20.336325 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:20.336403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:20.336754 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:20.836274 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:20.836352 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:20.836687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:20.836744 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:21.336273 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:21.336350 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:21.336700 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:21.836398 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:21.836482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:21.836816 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:22.336510 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:22.336583 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:22.336841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:22.836211 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:22.836292 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:22.836650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:23.336238 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:23.336314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:23.336696 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:23.336754 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:23.836429 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:23.836502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:23.836789 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:24.336496 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:24.336580 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:24.336961 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:24.836574 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:24.836650 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:24.836988 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:25.336500 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:25.336566 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:25.336817 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:25.336861 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:25.836628 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:25.836709 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:25.837047 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:26.337039 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:26.337121 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:26.337470 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:26.836162 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:26.836244 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:26.836581 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:27.336591 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:27.336672 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:27.337011 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:27.337065 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:27.836601 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:27.836681 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:27.837000 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:28.336523 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:28.336604 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:28.336874 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:28.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:28.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:28.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:29.336385 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:29.336497 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:29.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:29.836183 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:29.836257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:29.836558 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:29.836608 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:30.336289 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:30.336367 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:30.336681 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:30.836223 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:30.836310 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:30.836632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:31.336179 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:31.336247 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:31.336520 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:31.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:31.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:31.836631 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:31.836685 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:32.336406 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:32.336490 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:32.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:32.836185 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:32.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:32.836552 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:33.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:33.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:33.336778 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:33.836367 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:33.836492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:33.836841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:33.836899 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:34.336602 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:34.336672 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:34.336962 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:34.836466 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:34.836542 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:34.836843 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:35.336251 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:35.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:35.336690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:35.836215 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:35.836283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:35.836600 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:36.336641 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:36.336716 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:36.337095 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:36.337155 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:36.836776 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:36.836857 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:36.837203 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:37.337030 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:37.337151 1297065 node_ready.go:38] duration metric: took 6m0.001157945s for node "functional-562018" to be "Ready" ...
	I1213 14:55:37.340291 1297065 out.go:203] 
	W1213 14:55:37.343143 1297065 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 14:55:37.343162 1297065 out.go:285] * 
	* 
	W1213 14:55:37.345311 1297065 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 14:55:37.348302 1297065 out.go:203] 

                                                
                                                
** /stderr **
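The stderr above shows the shape of the failure: every ~500ms minikube re-issues GET https://192.168.49.2:8441/api/v1/nodes/functional-562018 to read the node's Ready condition, each attempt is refused ("dial tcp 192.168.49.2:8441: connect: connection refused"), and after the 6m0s wait the start aborts with GUEST_START. A minimal shell sketch of the same readiness probe, useful for reproducing the symptom outside the test harness; the endpoint, port, 0.5s interval, and 6-minute budget are taken from the log, while the curl-based probe itself (and its flags) is an illustrative stand-in for the protobuf GET minikube performs:

	#!/usr/bin/env bash
	# Poll the apiserver the way the node_ready wait does: one GET every 0.5s,
	# for up to 6 minutes, against the node object on the published control-plane port.
	# Any answer from the apiserver (even 401/403) counts as "reachable"; connection
	# refused keeps the loop going, which is exactly what the log above shows.
	deadline=$((SECONDS + 360))
	while [ "$SECONDS" -lt "$deadline" ]; do
	  if curl -sk --max-time 2 \
	      "https://192.168.49.2:8441/api/v1/nodes/functional-562018" >/dev/null; then
	    echo "apiserver reachable"; exit 0
	  fi
	  sleep 0.5
	done
	echo "apiserver never became reachable on 192.168.49.2:8441" >&2
	exit 1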
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-562018 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m5.72028575s for "functional-562018" cluster.
I1213 14:55:37.847481 1252934 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-562018
helpers_test.go:244: (dbg) docker inspect functional-562018:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	        "Created": "2025-12-13T14:41:15.451086653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1291703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T14:41:15.527927053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hosts",
	        "LogPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648-json.log",
	        "Name": "/functional-562018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-562018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-562018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	                "LowerDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-562018",
	                "Source": "/var/lib/docker/volumes/functional-562018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-562018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-562018",
	                "name.minikube.sigs.k8s.io": "functional-562018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b22297a29553cdd0dbc4eaa766abcdb1e67465ee18f1e2f5f9917dc8cf6d08",
	            "SandboxKey": "/var/run/docker/netns/f4b22297a295",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-562018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f3:95:ff:30:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd2e7aed753870546b4bccdeed8073ad5795c14334e906d06b964f72dc448c38",
	                    "EndpointID": "a0e947c1e40f773105c811b67b7d1d63f19d3a20060380bbde944bf9bfe39be5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-562018",
	                        "2cd1277ca783"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
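The inspect output confirms the kicbase container itself never died: State.Status is "running", RestartCount is 0, and 8441/tcp is still published on 127.0.0.1:33921, so the refused connections come from inside the guest (nothing listening on the apiserver port), not from a missing port mapping. A quick way to pull just those fields uses the same Go-template style the harness itself applies to the SSH port later in this log; the final curl probe is an assumption added for illustration, not something this run executed:

	# Container state and restart count, straight from the JSON shown above.
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' functional-562018
	# Host port that maps to the apiserver port 8441/tcp (33921 in this run).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-562018
	# With the mapping intact, a refused probe here points at the apiserver inside
	# the guest rather than at Docker networking on the host.
	curl -sk --max-time 2 https://127.0.0.1:33921/version || echo "apiserver not answering"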
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018: exit status 2 (350.499298ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
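The --format={{.Host}} query only reports the container state, which is why it prints "Running" (with exit status 2) even though the cluster is broken. A fuller check, not run by this harness step, is the unformatted status command, which reports the host, kubelet, apiserver, and kubeconfig separately; the expectation noted in the comment is an assumption based on the failure above:

	# Per-component status for the profile; here the host should show Running
	# while the apiserver component does not, matching the "(may be ok)" note.
	out/minikube-linux-arm64 status -p functional-562018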
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-831661 ssh sudo cat /etc/ssl/certs/12529342.pem                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
	│ ssh            │ functional-831661 ssh sudo cat /usr/share/ca-certificates/12529342.pem                                                                                          │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
	│ ssh            │ functional-831661 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image load --daemon kicbase/echo-server:functional-831661 --alsologtostderr                                                                   │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image save kicbase/echo-server:functional-831661 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image rm kicbase/echo-server:functional-831661 --alsologtostderr                                                                              │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ update-context │ functional-831661 update-context --alsologtostderr -v=2                                                                                                         │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ update-context │ functional-831661 update-context --alsologtostderr -v=2                                                                                                         │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ update-context │ functional-831661 update-context --alsologtostderr -v=2                                                                                                         │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image save --daemon kicbase/echo-server:functional-831661 --alsologtostderr                                                                   │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls --format json --alsologtostderr                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls --format short --alsologtostderr                                                                                                     │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls --format table --alsologtostderr                                                                                                     │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ ssh            │ functional-831661 ssh pgrep buildkitd                                                                                                                           │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	│ image          │ functional-831661 image ls --format yaml --alsologtostderr                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image build -t localhost/my-image:functional-831661 testdata/build --alsologtostderr                                                          │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ delete         │ -p functional-831661                                                                                                                                            │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ start          │ -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	│ start          │ -p functional-562018 --alsologtostderr -v=8                                                                                                                     │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:49 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:49:32
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:49:32.175934 1297065 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:49:32.176062 1297065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:49:32.176074 1297065 out.go:374] Setting ErrFile to fd 2...
	I1213 14:49:32.176081 1297065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:49:32.176329 1297065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:49:32.176775 1297065 out.go:368] Setting JSON to false
	I1213 14:49:32.177662 1297065 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23521,"bootTime":1765613851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:49:32.177756 1297065 start.go:143] virtualization:  
	I1213 14:49:32.181250 1297065 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:49:32.184279 1297065 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:49:32.184349 1297065 notify.go:221] Checking for updates...
	I1213 14:49:32.190681 1297065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:49:32.193733 1297065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:32.196589 1297065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:49:32.199444 1297065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 14:49:32.202364 1297065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:49:32.205680 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:32.205788 1297065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:49:32.233101 1297065 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:49:32.233224 1297065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:49:32.299716 1297065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 14:49:32.290425951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:49:32.299832 1297065 docker.go:319] overlay module found
	I1213 14:49:32.305094 1297065 out.go:179] * Using the docker driver based on existing profile
	I1213 14:49:32.307726 1297065 start.go:309] selected driver: docker
	I1213 14:49:32.307744 1297065 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:32.307856 1297065 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:49:32.307958 1297065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:49:32.364202 1297065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 14:49:32.354888078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:49:32.364608 1297065 cni.go:84] Creating CNI manager for ""
	I1213 14:49:32.364673 1297065 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:49:32.364721 1297065 start.go:353] cluster config:
	{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPa
th: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:32.367887 1297065 out.go:179] * Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	I1213 14:49:32.370579 1297065 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 14:49:32.373599 1297065 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 14:49:32.376553 1297065 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:49:32.376606 1297065 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 14:49:32.376621 1297065 cache.go:65] Caching tarball of preloaded images
	I1213 14:49:32.376630 1297065 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 14:49:32.376703 1297065 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 14:49:32.376713 1297065 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 14:49:32.376820 1297065 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
	I1213 14:49:32.396105 1297065 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 14:49:32.396128 1297065 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 14:49:32.396160 1297065 cache.go:243] Successfully downloaded all kic artifacts
	I1213 14:49:32.396191 1297065 start.go:360] acquireMachinesLock for functional-562018: {Name:mk6a7956e4fce5d8e0f4d6fe039ab67ad6cd688b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:49:32.396254 1297065 start.go:364] duration metric: took 40.319µs to acquireMachinesLock for "functional-562018"
	I1213 14:49:32.396277 1297065 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:49:32.396287 1297065 fix.go:54] fixHost starting: 
	I1213 14:49:32.396543 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:32.413077 1297065 fix.go:112] recreateIfNeeded on functional-562018: state=Running err=<nil>
	W1213 14:49:32.413105 1297065 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:49:32.416298 1297065 out.go:252] * Updating the running docker "functional-562018" container ...
	I1213 14:49:32.416337 1297065 machine.go:94] provisionDockerMachine start ...
	I1213 14:49:32.416434 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.434428 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.434755 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.434764 1297065 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:49:32.588560 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:49:32.588587 1297065 ubuntu.go:182] provisioning hostname "functional-562018"
	I1213 14:49:32.588651 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.607983 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.608286 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.608297 1297065 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-562018 && echo "functional-562018" | sudo tee /etc/hostname
	I1213 14:49:32.769183 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:49:32.769274 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.789428 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.789750 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.789773 1297065 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-562018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-562018/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-562018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:49:32.943886 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:49:32.943914 1297065 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 14:49:32.943934 1297065 ubuntu.go:190] setting up certificates
	I1213 14:49:32.943953 1297065 provision.go:84] configureAuth start
	I1213 14:49:32.944016 1297065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:49:32.962011 1297065 provision.go:143] copyHostCerts
	I1213 14:49:32.962065 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:49:32.962109 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 14:49:32.962123 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:49:32.962204 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 14:49:32.962309 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:49:32.962331 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 14:49:32.962339 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:49:32.962367 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 14:49:32.962422 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:49:32.962443 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 14:49:32.962451 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:49:32.962476 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 14:49:32.962539 1297065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.functional-562018 san=[127.0.0.1 192.168.49.2 functional-562018 localhost minikube]
	I1213 14:49:33.179564 1297065 provision.go:177] copyRemoteCerts
	I1213 14:49:33.179638 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:49:33.179690 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.200012 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.307268 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 14:49:33.307352 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 14:49:33.325080 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 14:49:33.325187 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 14:49:33.348055 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 14:49:33.348124 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 14:49:33.368733 1297065 provision.go:87] duration metric: took 424.756928ms to configureAuth
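	(Note: the copyRemoteCerts step above pushes the CA and server certificates to /etc/docker inside the node. A manual spot-check, not part of this run and assuming the profile is still up, would be to list those exact paths over SSH:
	    minikube -p functional-562018 ssh -- sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
	The byte counts should match the scp sizes logged above: 1082, 1220 and 1675 bytes.)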
	I1213 14:49:33.368776 1297065 ubuntu.go:206] setting minikube options for container-runtime
	I1213 14:49:33.368958 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:33.368972 1297065 machine.go:97] duration metric: took 952.628419ms to provisionDockerMachine
	I1213 14:49:33.368979 1297065 start.go:293] postStartSetup for "functional-562018" (driver="docker")
	I1213 14:49:33.368990 1297065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:49:33.369043 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:49:33.369100 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.388800 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.495227 1297065 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:49:33.498339 1297065 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 14:49:33.498360 1297065 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 14:49:33.498365 1297065 command_runner.go:130] > VERSION_ID="12"
	I1213 14:49:33.498369 1297065 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 14:49:33.498374 1297065 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 14:49:33.498378 1297065 command_runner.go:130] > ID=debian
	I1213 14:49:33.498382 1297065 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 14:49:33.498387 1297065 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 14:49:33.498400 1297065 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 14:49:33.498729 1297065 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 14:49:33.498752 1297065 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 14:49:33.498764 1297065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 14:49:33.498818 1297065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 14:49:33.498907 1297065 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 14:49:33.498914 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> /etc/ssl/certs/12529342.pem
	I1213 14:49:33.498991 1297065 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> hosts in /etc/test/nested/copy/1252934
	I1213 14:49:33.498996 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> /etc/test/nested/copy/1252934/hosts
	I1213 14:49:33.499038 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1252934
	I1213 14:49:33.506503 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:49:33.524063 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts --> /etc/test/nested/copy/1252934/hosts (40 bytes)
	I1213 14:49:33.542234 1297065 start.go:296] duration metric: took 173.238726ms for postStartSetup
	I1213 14:49:33.542347 1297065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:49:33.542395 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.560689 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.668283 1297065 command_runner.go:130] > 18%
	I1213 14:49:33.668429 1297065 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 14:49:33.673015 1297065 command_runner.go:130] > 160G
	I1213 14:49:33.673516 1297065 fix.go:56] duration metric: took 1.277224674s for fixHost
	I1213 14:49:33.673545 1297065 start.go:83] releasing machines lock for "functional-562018", held for 1.277279647s
	I1213 14:49:33.673651 1297065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:49:33.691077 1297065 ssh_runner.go:195] Run: cat /version.json
	I1213 14:49:33.691140 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.691468 1297065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:49:33.691538 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.709148 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.719417 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.814811 1297065 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 14:49:33.814943 1297065 ssh_runner.go:195] Run: systemctl --version
	I1213 14:49:33.903672 1297065 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 14:49:33.906947 1297065 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 14:49:33.906982 1297065 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 14:49:33.907055 1297065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 14:49:33.911546 1297065 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 14:49:33.911590 1297065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:49:33.911661 1297065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:49:33.919539 1297065 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 14:49:33.919560 1297065 start.go:496] detecting cgroup driver to use...
	I1213 14:49:33.919591 1297065 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 14:49:33.919652 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 14:49:33.935466 1297065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 14:49:33.948503 1297065 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:49:33.948565 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:49:33.964251 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:49:33.977532 1297065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:49:34.098935 1297065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:49:34.240532 1297065 docker.go:234] disabling docker service ...
	I1213 14:49:34.240643 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:49:34.257037 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:49:34.270650 1297065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:49:34.390022 1297065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:49:34.521564 1297065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:49:34.535848 1297065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:49:34.549721 1297065 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 14:49:34.551043 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 14:49:34.560293 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 14:49:34.569539 1297065 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 14:49:34.569607 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 14:49:34.578725 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:49:34.587464 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 14:49:34.595867 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:49:34.604914 1297065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:49:34.612837 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 14:49:34.621746 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 14:49:34.631405 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 14:49:34.640934 1297065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:49:34.647949 1297065 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 14:49:34.649110 1297065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:49:34.656959 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:34.763520 1297065 ssh_runner.go:195] Run: sudo systemctl restart containerd
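	(Note: the sequence of sed commands above rewrites /etc/containerd/config.toml in place before the daemon-reload and restart. A hedged way to spot-check the result on the node, using only the keys and values taken from those commands, would be:
	    sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
	Per the substitutions above, the expected values are sandbox_image = "registry.k8s.io/pause:3.10.1", restrict_oom_score_adj = false, SystemdCgroup = false, enable_unprivileged_ports = true and conf_dir = "/etc/cni/net.d"; the later crictl info dump in this log is consistent with those settings.)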
	I1213 14:49:34.891785 1297065 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 14:49:34.891886 1297065 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 14:49:34.896000 1297065 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 14:49:34.896045 1297065 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 14:49:34.896074 1297065 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1213 14:49:34.896088 1297065 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 14:49:34.896099 1297065 command_runner.go:130] > Access: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896109 1297065 command_runner.go:130] > Modify: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896114 1297065 command_runner.go:130] > Change: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896117 1297065 command_runner.go:130] >  Birth: -
	I1213 14:49:34.896860 1297065 start.go:564] Will wait 60s for crictl version
	I1213 14:49:34.896947 1297065 ssh_runner.go:195] Run: which crictl
	I1213 14:49:34.901248 1297065 command_runner.go:130] > /usr/local/bin/crictl
	I1213 14:49:34.901933 1297065 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 14:49:34.925912 1297065 command_runner.go:130] > Version:  0.1.0
	I1213 14:49:34.925937 1297065 command_runner.go:130] > RuntimeName:  containerd
	I1213 14:49:34.925943 1297065 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 14:49:34.925948 1297065 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 14:49:34.928438 1297065 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
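	(Note: the same runtime check can be reproduced by hand against the endpoint written to /etc/crictl.yaml earlier, using the crictl path reported by `which crictl` above; sketch only, not part of the test run:
	    sudo /usr/local/bin/crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
	This should report RuntimeName containerd and RuntimeVersion v2.2.0, as logged above.)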
	I1213 14:49:34.928554 1297065 ssh_runner.go:195] Run: containerd --version
	I1213 14:49:34.949487 1297065 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 14:49:34.951799 1297065 ssh_runner.go:195] Run: containerd --version
	I1213 14:49:34.970090 1297065 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 14:49:34.977895 1297065 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 14:49:34.980777 1297065 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 14:49:34.997091 1297065 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 14:49:35.003196 1297065 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 14:49:35.003415 1297065 kubeadm.go:884] updating cluster {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:49:35.003575 1297065 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:49:35.003657 1297065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:49:35.028469 1297065 command_runner.go:130] > {
	I1213 14:49:35.028488 1297065 command_runner.go:130] >   "images":  [
	I1213 14:49:35.028493 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028502 1297065 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 14:49:35.028509 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028514 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 14:49:35.028518 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028522 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028533 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 14:49:35.028536 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028541 1297065 command_runner.go:130] >       "size":  "40636774",
	I1213 14:49:35.028545 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028549 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028552 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028555 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028563 1297065 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 14:49:35.028567 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028572 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 14:49:35.028574 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028583 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028592 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 14:49:35.028595 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028599 1297065 command_runner.go:130] >       "size":  "8034419",
	I1213 14:49:35.028603 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028607 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028610 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028613 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028620 1297065 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 14:49:35.028624 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028630 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 14:49:35.028633 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028641 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028649 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 14:49:35.028652 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028656 1297065 command_runner.go:130] >       "size":  "21168808",
	I1213 14:49:35.028660 1297065 command_runner.go:130] >       "username":  "nonroot",
	I1213 14:49:35.028664 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028667 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028670 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028677 1297065 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 14:49:35.028680 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028685 1297065 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 14:49:35.028688 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028691 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028698 1297065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 14:49:35.028701 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028706 1297065 command_runner.go:130] >       "size":  "21136588",
	I1213 14:49:35.028710 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028714 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028717 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028721 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028725 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028731 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028734 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028741 1297065 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 14:49:35.028745 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028750 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 14:49:35.028753 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028757 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028764 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 14:49:35.028768 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028772 1297065 command_runner.go:130] >       "size":  "24678359",
	I1213 14:49:35.028775 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028783 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028786 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028790 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028794 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028797 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028799 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028806 1297065 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 14:49:35.028809 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028815 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 14:49:35.028818 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028822 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028829 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 14:49:35.028833 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028837 1297065 command_runner.go:130] >       "size":  "20661043",
	I1213 14:49:35.028841 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028844 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028847 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028852 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028855 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028858 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028861 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028867 1297065 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 14:49:35.028877 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028883 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 14:49:35.028886 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028890 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028897 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 14:49:35.028900 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028905 1297065 command_runner.go:130] >       "size":  "22429671",
	I1213 14:49:35.028908 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028912 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028915 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028919 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028926 1297065 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 14:49:35.028929 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028934 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 14:49:35.028937 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028941 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028948 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 14:49:35.028951 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028955 1297065 command_runner.go:130] >       "size":  "15391364",
	I1213 14:49:35.028959 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028962 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028965 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028969 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028972 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028975 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028978 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028984 1297065 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 14:49:35.028987 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028992 1297065 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 14:49:35.028995 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028998 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.029005 1297065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 14:49:35.029009 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.029016 1297065 command_runner.go:130] >       "size":  "267939",
	I1213 14:49:35.029019 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.029023 1297065 command_runner.go:130] >         "value":  "65535"
	I1213 14:49:35.029030 1297065 command_runner.go:130] >       },
	I1213 14:49:35.029034 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.029037 1297065 command_runner.go:130] >       "pinned":  true
	I1213 14:49:35.029040 1297065 command_runner.go:130] >     }
	I1213 14:49:35.029043 1297065 command_runner.go:130] >   ]
	I1213 14:49:35.029046 1297065 command_runner.go:130] > }
	I1213 14:49:35.031562 1297065 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:49:35.031587 1297065 containerd.go:534] Images already preloaded, skipping extraction
	I1213 14:49:35.031647 1297065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:49:35.054892 1297065 command_runner.go:130] > {
	I1213 14:49:35.054913 1297065 command_runner.go:130] >   "images":  [
	I1213 14:49:35.054918 1297065 command_runner.go:130] >     {
	I1213 14:49:35.054928 1297065 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 14:49:35.054933 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.054939 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 14:49:35.054943 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.054947 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.054959 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 14:49:35.054966 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.054970 1297065 command_runner.go:130] >       "size":  "40636774",
	I1213 14:49:35.054977 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.054982 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.054993 1297065 command_runner.go:130] >     },
	I1213 14:49:35.054996 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055014 1297065 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 14:49:35.055021 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055030 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 14:49:35.055033 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055037 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055045 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 14:49:35.055049 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055053 1297065 command_runner.go:130] >       "size":  "8034419",
	I1213 14:49:35.055057 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055060 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055064 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055067 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055074 1297065 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 14:49:35.055081 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055086 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 14:49:35.055092 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055104 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055117 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 14:49:35.055121 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055125 1297065 command_runner.go:130] >       "size":  "21168808",
	I1213 14:49:35.055135 1297065 command_runner.go:130] >       "username":  "nonroot",
	I1213 14:49:35.055139 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055143 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055151 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055158 1297065 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 14:49:35.055162 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055169 1297065 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 14:49:35.055173 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055177 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055187 1297065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 14:49:35.055193 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055201 1297065 command_runner.go:130] >       "size":  "21136588",
	I1213 14:49:35.055205 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055210 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055217 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055221 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055225 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055231 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055234 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055241 1297065 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 14:49:35.055246 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055254 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 14:49:35.055257 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055261 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055272 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 14:49:35.055278 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055283 1297065 command_runner.go:130] >       "size":  "24678359",
	I1213 14:49:35.055286 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055294 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055300 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055304 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055329 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055335 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055339 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055346 1297065 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 14:49:35.055352 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055358 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 14:49:35.055371 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055375 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055383 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 14:49:35.055388 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055392 1297065 command_runner.go:130] >       "size":  "20661043",
	I1213 14:49:35.055399 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055403 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055410 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055415 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055422 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055425 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055428 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055435 1297065 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 14:49:35.055446 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055452 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 14:49:35.055455 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055460 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055469 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 14:49:35.055477 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055482 1297065 command_runner.go:130] >       "size":  "22429671",
	I1213 14:49:35.055486 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055494 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055497 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055500 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055511 1297065 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 14:49:35.055515 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055524 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 14:49:35.055529 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055533 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055541 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 14:49:35.055547 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055551 1297065 command_runner.go:130] >       "size":  "15391364",
	I1213 14:49:35.055554 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055559 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055564 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055568 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055574 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055578 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055581 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055587 1297065 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 14:49:35.055595 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055602 1297065 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 14:49:35.055608 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055612 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055620 1297065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 14:49:35.055626 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055630 1297065 command_runner.go:130] >       "size":  "267939",
	I1213 14:49:35.055633 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055637 1297065 command_runner.go:130] >         "value":  "65535"
	I1213 14:49:35.055651 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055655 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055659 1297065 command_runner.go:130] >       "pinned":  true
	I1213 14:49:35.055662 1297065 command_runner.go:130] >     }
	I1213 14:49:35.055666 1297065 command_runner.go:130] >   ]
	I1213 14:49:35.055669 1297065 command_runner.go:130] > }
	I1213 14:49:35.057995 1297065 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:49:35.058021 1297065 cache_images.go:86] Images are preloaded, skipping loading
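	(Note: the two `sudo crictl images --output json` dumps above are what the preload check compares against the expected v1.35.0-beta.0 image set before deciding to skip extraction and loading. To list just the tags from that JSON, a sketch assuming jq is available on the host or node, which it may not be inside the kicbase image:
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'
	For this run that yields kindnetd, storage-provisioner, coredns v1.13.1, etcd 3.6.5-0, the four v1.35.0-beta.0 control-plane images and pause:3.10.1.)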
	I1213 14:49:35.058031 1297065 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 14:49:35.058154 1297065 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-562018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
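	(Note: the kubelet ExecStart line above is typically written into a systemd drop-in on the node. To see the unit as systemd actually resolves it, a manual sketch would be:
	    minikube -p functional-562018 ssh -- sudo systemctl cat kubelet
	which prints the base unit plus any drop-ins, including the --hostname-override, --kubeconfig and --node-ip flags shown above.)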
	I1213 14:49:35.058232 1297065 ssh_runner.go:195] Run: sudo crictl info
	I1213 14:49:35.082362 1297065 command_runner.go:130] > {
	I1213 14:49:35.082385 1297065 command_runner.go:130] >   "cniconfig": {
	I1213 14:49:35.082391 1297065 command_runner.go:130] >     "Networks": [
	I1213 14:49:35.082395 1297065 command_runner.go:130] >       {
	I1213 14:49:35.082401 1297065 command_runner.go:130] >         "Config": {
	I1213 14:49:35.082405 1297065 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 14:49:35.082411 1297065 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 14:49:35.082415 1297065 command_runner.go:130] >           "Plugins": [
	I1213 14:49:35.082419 1297065 command_runner.go:130] >             {
	I1213 14:49:35.082423 1297065 command_runner.go:130] >               "Network": {
	I1213 14:49:35.082427 1297065 command_runner.go:130] >                 "ipam": {},
	I1213 14:49:35.082432 1297065 command_runner.go:130] >                 "type": "loopback"
	I1213 14:49:35.082436 1297065 command_runner.go:130] >               },
	I1213 14:49:35.082446 1297065 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 14:49:35.082450 1297065 command_runner.go:130] >             }
	I1213 14:49:35.082457 1297065 command_runner.go:130] >           ],
	I1213 14:49:35.082467 1297065 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 14:49:35.082473 1297065 command_runner.go:130] >         },
	I1213 14:49:35.082488 1297065 command_runner.go:130] >         "IFName": "lo"
	I1213 14:49:35.082495 1297065 command_runner.go:130] >       }
	I1213 14:49:35.082498 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082503 1297065 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 14:49:35.082507 1297065 command_runner.go:130] >     "PluginDirs": [
	I1213 14:49:35.082511 1297065 command_runner.go:130] >       "/opt/cni/bin"
	I1213 14:49:35.082516 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082520 1297065 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 14:49:35.082527 1297065 command_runner.go:130] >     "Prefix": "eth"
	I1213 14:49:35.082530 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082533 1297065 command_runner.go:130] >   "config": {
	I1213 14:49:35.082537 1297065 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 14:49:35.082544 1297065 command_runner.go:130] >       "/etc/cdi",
	I1213 14:49:35.082549 1297065 command_runner.go:130] >       "/var/run/cdi"
	I1213 14:49:35.082552 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082559 1297065 command_runner.go:130] >     "cni": {
	I1213 14:49:35.082562 1297065 command_runner.go:130] >       "binDir": "",
	I1213 14:49:35.082566 1297065 command_runner.go:130] >       "binDirs": [
	I1213 14:49:35.082570 1297065 command_runner.go:130] >         "/opt/cni/bin"
	I1213 14:49:35.082573 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.082578 1297065 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 14:49:35.082581 1297065 command_runner.go:130] >       "confTemplate": "",
	I1213 14:49:35.082586 1297065 command_runner.go:130] >       "ipPref": "",
	I1213 14:49:35.082589 1297065 command_runner.go:130] >       "maxConfNum": 1,
	I1213 14:49:35.082593 1297065 command_runner.go:130] >       "setupSerially": false,
	I1213 14:49:35.082601 1297065 command_runner.go:130] >       "useInternalLoopback": false
	I1213 14:49:35.082604 1297065 command_runner.go:130] >     },
	I1213 14:49:35.082611 1297065 command_runner.go:130] >     "containerd": {
	I1213 14:49:35.082617 1297065 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 14:49:35.082622 1297065 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 14:49:35.082629 1297065 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 14:49:35.082634 1297065 command_runner.go:130] >       "runtimes": {
	I1213 14:49:35.082637 1297065 command_runner.go:130] >         "runc": {
	I1213 14:49:35.082648 1297065 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 14:49:35.082654 1297065 command_runner.go:130] >           "PodAnnotations": null,
	I1213 14:49:35.082659 1297065 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 14:49:35.082672 1297065 command_runner.go:130] >           "cgroupWritable": false,
	I1213 14:49:35.082676 1297065 command_runner.go:130] >           "cniConfDir": "",
	I1213 14:49:35.082680 1297065 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 14:49:35.082684 1297065 command_runner.go:130] >           "io_type": "",
	I1213 14:49:35.082688 1297065 command_runner.go:130] >           "options": {
	I1213 14:49:35.082693 1297065 command_runner.go:130] >             "BinaryName": "",
	I1213 14:49:35.082699 1297065 command_runner.go:130] >             "CriuImagePath": "",
	I1213 14:49:35.082703 1297065 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 14:49:35.082707 1297065 command_runner.go:130] >             "IoGid": 0,
	I1213 14:49:35.082714 1297065 command_runner.go:130] >             "IoUid": 0,
	I1213 14:49:35.082719 1297065 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 14:49:35.082725 1297065 command_runner.go:130] >             "Root": "",
	I1213 14:49:35.082729 1297065 command_runner.go:130] >             "ShimCgroup": "",
	I1213 14:49:35.082743 1297065 command_runner.go:130] >             "SystemdCgroup": false
	I1213 14:49:35.082746 1297065 command_runner.go:130] >           },
	I1213 14:49:35.082751 1297065 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 14:49:35.082758 1297065 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 14:49:35.082765 1297065 command_runner.go:130] >           "runtimePath": "",
	I1213 14:49:35.082769 1297065 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 14:49:35.082774 1297065 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 14:49:35.082778 1297065 command_runner.go:130] >           "snapshotter": ""
	I1213 14:49:35.082784 1297065 command_runner.go:130] >         }
	I1213 14:49:35.082787 1297065 command_runner.go:130] >       }
	I1213 14:49:35.082790 1297065 command_runner.go:130] >     },
	I1213 14:49:35.082801 1297065 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 14:49:35.082809 1297065 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 14:49:35.082816 1297065 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 14:49:35.082820 1297065 command_runner.go:130] >     "disableApparmor": false,
	I1213 14:49:35.082825 1297065 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 14:49:35.082832 1297065 command_runner.go:130] >     "disableProcMount": false,
	I1213 14:49:35.082839 1297065 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 14:49:35.082845 1297065 command_runner.go:130] >     "enableCDI": true,
	I1213 14:49:35.082850 1297065 command_runner.go:130] >     "enableSelinux": false,
	I1213 14:49:35.082857 1297065 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 14:49:35.082862 1297065 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 14:49:35.082866 1297065 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 14:49:35.082871 1297065 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 14:49:35.082875 1297065 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 14:49:35.082880 1297065 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 14:49:35.082887 1297065 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 14:49:35.082893 1297065 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 14:49:35.082904 1297065 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 14:49:35.082910 1297065 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 14:49:35.082915 1297065 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 14:49:35.082926 1297065 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 14:49:35.082932 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082936 1297065 command_runner.go:130] >   "features": {
	I1213 14:49:35.082943 1297065 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 14:49:35.082946 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082950 1297065 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 14:49:35.082959 1297065 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 14:49:35.082976 1297065 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 14:49:35.082980 1297065 command_runner.go:130] >   "runtimeHandlers": [
	I1213 14:49:35.082984 1297065 command_runner.go:130] >     {
	I1213 14:49:35.082988 1297065 command_runner.go:130] >       "features": {
	I1213 14:49:35.083000 1297065 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 14:49:35.083004 1297065 command_runner.go:130] >         "user_namespaces": true
	I1213 14:49:35.083008 1297065 command_runner.go:130] >       }
	I1213 14:49:35.083012 1297065 command_runner.go:130] >     },
	I1213 14:49:35.083017 1297065 command_runner.go:130] >     {
	I1213 14:49:35.083021 1297065 command_runner.go:130] >       "features": {
	I1213 14:49:35.083026 1297065 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 14:49:35.083033 1297065 command_runner.go:130] >         "user_namespaces": true
	I1213 14:49:35.083041 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083055 1297065 command_runner.go:130] >       "name": "runc"
	I1213 14:49:35.083058 1297065 command_runner.go:130] >     }
	I1213 14:49:35.083061 1297065 command_runner.go:130] >   ],
	I1213 14:49:35.083064 1297065 command_runner.go:130] >   "status": {
	I1213 14:49:35.083068 1297065 command_runner.go:130] >     "conditions": [
	I1213 14:49:35.083077 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083081 1297065 command_runner.go:130] >         "message": "",
	I1213 14:49:35.083085 1297065 command_runner.go:130] >         "reason": "",
	I1213 14:49:35.083089 1297065 command_runner.go:130] >         "status": true,
	I1213 14:49:35.083098 1297065 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 14:49:35.083104 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083107 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083113 1297065 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 14:49:35.083118 1297065 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 14:49:35.083122 1297065 command_runner.go:130] >         "status": false,
	I1213 14:49:35.083128 1297065 command_runner.go:130] >         "type": "NetworkReady"
	I1213 14:49:35.083132 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083135 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083160 1297065 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 14:49:35.083171 1297065 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 14:49:35.083176 1297065 command_runner.go:130] >         "status": false,
	I1213 14:49:35.083182 1297065 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 14:49:35.083186 1297065 command_runner.go:130] >       }
	I1213 14:49:35.083190 1297065 command_runner.go:130] >     ]
	I1213 14:49:35.083196 1297065 command_runner.go:130] >   }
	I1213 14:49:35.083199 1297065 command_runner.go:130] > }
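Note: the JSON above is the containerd CRI status document that minikube reads back (the same output "sudo crictl info" prints); at this point RuntimeReady is true but NetworkReady is false because no CNI config exists yet, which is why a CNI manager is created next. As a rough illustration only (not minikube's own code), a small Go program could pull that same verdict out of the JSON:

// check_cri_status.go - illustrative sketch: parse the "status.conditions"
// portion of the crictl info JSON shown above and report whether the CNI
// network is ready. Field names follow the JSON dumped in this log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type criInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	var info criInfo
	if err := json.NewDecoder(os.Stdin).Decode(&info); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, c := range info.Status.Conditions {
		if c.Type == "NetworkReady" && !c.Status {
			fmt.Printf("CNI not ready: %s (%s)\n", c.Reason, c.Message)
			return
		}
	}
	fmt.Println("NetworkReady: true")
}

Piping the output of "sudo crictl info" into this program would print the NetworkPluginNotReady reason seen above.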
	I1213 14:49:35.086343 1297065 cni.go:84] Creating CNI manager for ""
	I1213 14:49:35.086370 1297065 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:49:35.086397 1297065 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:49:35.086420 1297065 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-562018 NodeName:functional-562018 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:49:35.086540 1297065 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-562018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
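Note: the block above is the multi-document kubeadm config that minikube renders from the kubeadm options logged just before it; it is written to /var/tmp/minikube/kubeadm.yaml.new below and later compared against the existing kubeadm.yaml with diff -u to decide whether the control plane needs reconfiguration. A minimal sketch (assuming gopkg.in/yaml.v3; not part of minikube) that lists the apiVersion/kind of each document in such a file:

// list_kubeadm_docs.go - illustrative sketch: print the apiVersion and kind
// of every YAML document in a multi-document kubeadm config file.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type docHeader struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open(os.Args[1]) // e.g. /var/tmp/minikube/kubeadm.yaml.new
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var h docHeader
		if err := dec.Decode(&h); err != nil {
			if errors.Is(err, io.EOF) {
				return // no more documents
			}
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		fmt.Printf("%s %s\n", h.APIVersion, h.Kind)
	}
}

Run against the file above it would list InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration.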
	
	I1213 14:49:35.086621 1297065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 14:49:35.094718 1297065 command_runner.go:130] > kubeadm
	I1213 14:49:35.094739 1297065 command_runner.go:130] > kubectl
	I1213 14:49:35.094743 1297065 command_runner.go:130] > kubelet
	I1213 14:49:35.094761 1297065 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:49:35.094814 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:49:35.102589 1297065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 14:49:35.115905 1297065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 14:49:35.129462 1297065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 14:49:35.142335 1297065 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 14:49:35.146161 1297065 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 14:49:35.146280 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:35.271079 1297065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:49:35.585791 1297065 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018 for IP: 192.168.49.2
	I1213 14:49:35.585864 1297065 certs.go:195] generating shared ca certs ...
	I1213 14:49:35.585895 1297065 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:35.586063 1297065 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 14:49:35.586138 1297065 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 14:49:35.586175 1297065 certs.go:257] generating profile certs ...
	I1213 14:49:35.586327 1297065 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key
	I1213 14:49:35.586437 1297065 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee
	I1213 14:49:35.586523 1297065 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key
	I1213 14:49:35.586557 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 14:49:35.586602 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 14:49:35.586632 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 14:49:35.586672 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 14:49:35.586707 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 14:49:35.586737 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 14:49:35.586777 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 14:49:35.586811 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 14:49:35.586902 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 14:49:35.586962 1297065 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 14:49:35.586986 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:49:35.587046 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 14:49:35.587098 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:49:35.587157 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 14:49:35.587232 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:49:35.587302 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.587371 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem -> /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.587399 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.588006 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:49:35.609077 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:49:35.630697 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:49:35.652426 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 14:49:35.670342 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 14:49:35.687837 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 14:49:35.705877 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:49:35.723466 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:49:35.740679 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:49:35.758304 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 14:49:35.776736 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 14:49:35.794339 1297065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:49:35.806740 1297065 ssh_runner.go:195] Run: openssl version
	I1213 14:49:35.812461 1297065 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 14:49:35.812883 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.820227 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:49:35.827978 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831610 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831636 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831688 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.871766 1297065 command_runner.go:130] > b5213941
	I1213 14:49:35.872189 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:49:35.879531 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.886529 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 14:49:35.894015 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897550 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897859 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897930 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.938203 1297065 command_runner.go:130] > 51391683
	I1213 14:49:35.938708 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 14:49:35.946069 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.953176 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 14:49:35.960486 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964477 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964589 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964665 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 14:49:36.007360 1297065 command_runner.go:130] > 3ec20f2e
	I1213 14:49:36.007602 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
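Note: the three certificate blocks above follow one pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash (b5213941, 51391683, 3ec20f2e), and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients trust the CA. A minimal Go sketch of those steps (an illustration, not minikube's certs.go; it shells out to the same openssl invocation seen in the log and needs root):

// cahash_link.go - illustrative sketch: compute a CA's OpenSSL subject hash and
// create the /etc/ssl/certs/<hash>.0 symlink, mirroring the log lines above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // assumed already copied, as in the log
	// "openssl x509 -hash -noout" prints the subject hash used to name trust-store symlinks.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, like "ln -fs"
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, "symlink:", err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", pem)
}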
	I1213 14:49:36.019390 1297065 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:49:36.024551 1297065 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:49:36.024587 1297065 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 14:49:36.024604 1297065 command_runner.go:130] > Device: 259,1	Inode: 2346070     Links: 1
	I1213 14:49:36.024612 1297065 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 14:49:36.024618 1297065 command_runner.go:130] > Access: 2025-12-13 14:45:28.579602026 +0000
	I1213 14:49:36.024623 1297065 command_runner.go:130] > Modify: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024628 1297065 command_runner.go:130] > Change: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024634 1297065 command_runner.go:130] >  Birth: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024743 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:49:36.067430 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.067964 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:49:36.109753 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.110299 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:49:36.151650 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.152123 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:49:36.199598 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.200366 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:49:36.241923 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.242478 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 14:49:36.282927 1297065 command_runner.go:130] > Certificate will not expire
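Note: each "openssl x509 ... -checkend 86400" call above exits zero and prints "Certificate will not expire" only if the certificate remains valid for at least the next 86400 seconds (24 hours). A rough standard-library Go equivalent (an illustrative sketch, not what minikube actually runs):

// checkend.go - illustrative sketch: fail if the given PEM certificate
// expires within the next 24 hours, like "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/apiserver-kubelet-client.crt
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}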
	I1213 14:49:36.283387 1297065 kubeadm.go:401] StartCluster: {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:36.283480 1297065 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 14:49:36.283586 1297065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:49:36.308975 1297065 cri.go:89] found id: ""
	I1213 14:49:36.309092 1297065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:49:36.316103 1297065 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 14:49:36.316129 1297065 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 14:49:36.316138 1297065 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 14:49:36.317085 1297065 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 14:49:36.317145 1297065 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 14:49:36.317231 1297065 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 14:49:36.324724 1297065 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:49:36.325158 1297065 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-562018" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.325271 1297065 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "functional-562018" cluster setting kubeconfig missing "functional-562018" context setting]
	I1213 14:49:36.325603 1297065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.326011 1297065 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.326154 1297065 kapi.go:59] client config for functional-562018: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key", CAFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:49:36.326701 1297065 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 14:49:36.326719 1297065 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 14:49:36.326724 1297065 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 14:49:36.326733 1297065 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 14:49:36.326744 1297065 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 14:49:36.327001 1297065 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 14:49:36.327093 1297065 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 14:49:36.334496 1297065 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 14:49:36.334531 1297065 kubeadm.go:602] duration metric: took 17.366177ms to restartPrimaryControlPlane
	I1213 14:49:36.334540 1297065 kubeadm.go:403] duration metric: took 51.160034ms to StartCluster
	I1213 14:49:36.334555 1297065 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.334613 1297065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.335214 1297065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.335450 1297065 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 14:49:36.335789 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:36.335866 1297065 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 14:49:36.335932 1297065 addons.go:70] Setting storage-provisioner=true in profile "functional-562018"
	I1213 14:49:36.335945 1297065 addons.go:239] Setting addon storage-provisioner=true in "functional-562018"
	I1213 14:49:36.335975 1297065 host.go:66] Checking if "functional-562018" exists ...
	I1213 14:49:36.336461 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.336835 1297065 addons.go:70] Setting default-storageclass=true in profile "functional-562018"
	I1213 14:49:36.336857 1297065 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-562018"
	I1213 14:49:36.337151 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.340699 1297065 out.go:179] * Verifying Kubernetes components...
	I1213 14:49:36.343477 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:36.374082 1297065 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 14:49:36.376797 1297065 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.376892 1297065 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:36.376917 1297065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 14:49:36.376979 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:36.377245 1297065 kapi.go:59] client config for functional-562018: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key", CAFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:49:36.377532 1297065 addons.go:239] Setting addon default-storageclass=true in "functional-562018"
	I1213 14:49:36.377566 1297065 host.go:66] Checking if "functional-562018" exists ...
	I1213 14:49:36.377992 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.415567 1297065 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:36.415590 1297065 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 14:49:36.415656 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:36.416969 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:36.442534 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:36.534721 1297065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:49:36.592567 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:36.600370 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:37.335898 1297065 node_ready.go:35] waiting up to 6m0s for node "functional-562018" to be "Ready" ...
	I1213 14:49:37.335934 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.336074 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336106 1297065 retry.go:31] will retry after 199.574589ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336165 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.336178 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336184 1297065 retry.go:31] will retry after 285.216803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336272 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:37.336359 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:37.336679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:37.536000 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:37.591050 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.594766 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.594797 1297065 retry.go:31] will retry after 489.410948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.621926 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:37.677113 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.681307 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.681342 1297065 retry.go:31] will retry after 401.770697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.836587 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:37.836683 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:37.837004 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.083592 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:38.085139 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:38.190416 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.194296 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.194326 1297065 retry.go:31] will retry after 757.686696ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.207705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.207792 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.207830 1297065 retry.go:31] will retry after 505.194475ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.337091 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:38.337186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:38.337548 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.714015 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:38.783498 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.783559 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.783593 1297065 retry.go:31] will retry after 988.219406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.836722 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:38.836873 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:38.837238 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.952600 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:39.020705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:39.020749 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.020768 1297065 retry.go:31] will retry after 1.072702638s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.337164 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:39.337235 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:39.337545 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:39.337593 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
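Note: the repeated GETs to https://192.168.49.2:8441/api/v1/nodes/functional-562018 above fail with connection refused because the apiserver is not accepting connections yet; node_ready keeps polling for up to the 6m0s wait configured earlier. A self-contained sketch of that kind of polling loop (illustrative only; the real client authenticates with the cluster CA and client certificates instead of skipping TLS verification):

// wait_apiserver.go - illustrative sketch: poll the node endpoint until the
// apiserver answers or a deadline passes, mirroring the GETs logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-562018"
	// InsecureSkipVerify keeps the sketch self-contained; do not do this in real tooling.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the "waiting up to 6m0s" above
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("apiserver not reachable yet:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered with status", resp.Status)
		return
	}
	fmt.Println("timed out waiting for the apiserver")
}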
	I1213 14:49:39.772102 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:39.836685 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:39.836850 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:39.837201 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:39.843566 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:39.843633 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.843675 1297065 retry.go:31] will retry after 1.296209829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.093780 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:40.156222 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:40.156329 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.156372 1297065 retry.go:31] will retry after 965.768616ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.336552 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:40.336651 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:40.336970 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:40.836822 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:40.836895 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:40.837217 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:41.122779 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:41.140323 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:41.215097 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:41.215182 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.215214 1297065 retry.go:31] will retry after 2.369565148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.219568 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:41.219636 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.219656 1297065 retry.go:31] will retry after 2.455142313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.336947 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:41.337019 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:41.337416 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:41.837047 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:41.837124 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:41.837388 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:41.837438 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:42.337111 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:42.337201 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:42.337621 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:42.836363 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:42.836444 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:42.836803 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:43.336226 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:43.336304 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:43.336552 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:43.585084 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:43.645189 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:43.649081 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.649137 1297065 retry.go:31] will retry after 3.995275361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.675423 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:43.738811 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:43.738856 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.738876 1297065 retry.go:31] will retry after 3.319355388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.837038 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:43.837127 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:43.837467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:43.837521 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:44.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:44.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:44.336669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:44.836348 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:44.836422 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:44.836715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:45.336277 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:45.336359 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:45.336715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:45.836385 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:45.836482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:45.836839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:46.336842 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:46.336917 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:46.337174 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:46.337224 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:46.836567 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:46.836641 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:46.837050 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:47.058405 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:47.140540 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:47.144585 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.144615 1297065 retry.go:31] will retry after 3.814662677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:47.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:47.336673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:47.645178 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:47.704569 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:47.708191 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.708226 1297065 retry.go:31] will retry after 4.571128182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.836452 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:47.836522 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:47.836787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:48.336260 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:48.336350 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:48.336628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:48.836239 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:48.836318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:48.836676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:48.836743 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:49.336220 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:49.336290 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:49.336531 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:49.836286 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:49.836362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:49.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.336380 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:50.336455 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:50.336799 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.836292 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:50.836366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:50.836651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.960127 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:51.026705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:51.026749 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:51.026767 1297065 retry.go:31] will retry after 9.152833031s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:51.336157 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:51.336252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:51.336592 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:51.336645 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:51.836328 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:51.836439 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:51.836752 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:52.280634 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:52.336265 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:52.336348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:52.336649 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:52.351151 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:52.351191 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:52.351210 1297065 retry.go:31] will retry after 6.806315756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:52.837084 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:52.837176 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:52.837503 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:53.336231 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:53.336347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:53.336679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:53.336735 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:53.836278 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:53.836358 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:53.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:54.336375 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:54.336453 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:54.336787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:54.836534 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:54.836609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:54.836960 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:55.336530 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:55.336608 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:55.336965 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:55.337034 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:55.836817 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:55.836889 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:55.837215 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:56.337019 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:56.337095 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:56.337433 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:56.836157 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:56.836242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:56.836511 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:57.336450 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:57.336529 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:57.336899 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:57.836240 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:57.836331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:57.836629 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:57.836681 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:58.336184 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:58.336276 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:58.336593 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:58.836286 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:58.836386 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:58.836728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:59.158224 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:59.216557 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:59.216609 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:59.216627 1297065 retry.go:31] will retry after 13.782587086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:59.336899 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:59.336976 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:59.337309 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:59.837050 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:59.837117 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:59.837393 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:59.837436 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:00.179978 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:00.336210 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:00.336301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:00.337482 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 14:50:00.358964 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:00.359008 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:00.359030 1297065 retry.go:31] will retry after 12.357990487s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
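	(The "will retry after …" entries above come from minikube's backoff-driven addon apply loop. As a rough illustration only — this is not minikube's actual retry.go code, and the helper names here are made up — a comparable retry around the failing kubectl apply might look like the following Go sketch. kubectl validates manifests against the apiserver's OpenAPI endpoint, which is why every attempt fails with "connection refused" while nothing is listening on port 8441.)

	    // Illustrative sketch: retry a kubectl apply with a growing delay.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    func applyWithRetry(manifest string, attempts int) error {
	        delay := time.Second
	        for i := 0; i < attempts; i++ {
	            // Fails while localhost:8441 refuses connections, as in the log above.
	            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
	            if err == nil {
	                return nil
	            }
	            fmt.Printf("apply failed (%v): %s; retrying in %v\n", err, out, delay)
	            time.Sleep(delay)
	            delay *= 2 // back off between attempts
	        }
	        return fmt.Errorf("giving up after %d attempts", attempts)
	    }

	    func main() {
	        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
	            fmt.Println(err)
	        }
	    }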
	I1213 14:50:00.836789 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:00.836882 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:00.837203 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:01.336532 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:01.336609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:01.336921 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:01.836255 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:01.836341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:01.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:02.336512 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:02.336592 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:02.336956 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:02.337013 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:02.836530 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:02.836611 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:02.836888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:03.336279 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:03.336358 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:03.336701 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:03.836401 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:03.836474 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:03.836845 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:04.336380 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:04.336454 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:04.336784 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:04.836248 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:04.836328 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:04.836660 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:04.836716 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:05.336407 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:05.336490 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:05.336806 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:05.836467 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:05.836548 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:05.836865 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:06.336870 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:06.336953 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:06.337350 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:06.837024 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:06.837097 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:06.837419 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:06.837478 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:07.336416 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:07.336488 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:07.336747 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:07.836490 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:07.836567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:07.836906 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:08.336625 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:08.336699 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:08.337020 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:08.836518 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:08.836588 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:08.836865 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:09.336612 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:09.336692 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:09.337049 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:09.337109 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:09.836858 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:09.836939 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:09.837272 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:10.337051 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:10.337125 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:10.337387 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:10.837153 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:10.837234 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:10.837582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:11.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:11.336261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:11.336582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:11.836188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:11.836256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:11.836517 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:11.836567 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
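	(The node_ready.go warnings above are the readiness poll against the apiserver on 192.168.49.2:8441. A minimal client-go sketch of a similar Ready-condition poll is shown below; it is illustrative only, not minikube's implementation, and the kubeconfig path and node name are assumptions taken from the log.)

	    // Illustrative sketch: poll a node's Ready condition until the apiserver answers.
	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        v1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func waitForReady(cs *kubernetes.Clientset, node string) {
	        for {
	            n, err := cs.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
	            if err != nil {
	                // Mirrors the "connection refused" warnings above while the apiserver is down.
	                fmt.Println("will retry:", err)
	                time.Sleep(500 * time.Millisecond)
	                continue
	            }
	            for _, c := range n.Status.Conditions {
	                if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
	                    fmt.Println("node is Ready")
	                    return
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	    }

	    func main() {
	        // Kubeconfig path is an assumption for illustration; minikube manages its own kubeconfig.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        waitForReady(cs, "functional-562018")
	    }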
	I1213 14:50:12.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:12.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:12.336916 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:12.717305 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:12.775348 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:12.775393 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:12.775414 1297065 retry.go:31] will retry after 16.474515121s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:12.836567 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:12.836650 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:12.837019 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:13.000372 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:13.059399 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:13.063613 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:13.063652 1297065 retry.go:31] will retry after 8.071550656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:13.336122 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:13.336199 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:13.336467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:13.836136 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:13.836218 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:13.836591 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:13.836660 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:14.336363 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:14.336438 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:14.336774 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:14.836188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:14.836261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:14.836540 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:15.336277 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:15.336353 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:15.336677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:15.836219 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:15.836293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:15.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:16.336546 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:16.336617 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:16.336864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:16.336904 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:16.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:16.836348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:16.836648 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:17.336586 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:17.336661 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:17.337008 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:17.836520 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:17.836585 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:17.836858 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:18.336257 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:18.336354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:18.336715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:18.836428 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:18.836502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:18.836842 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:18.836899 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:19.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:19.336311 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:19.336563 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:19.836231 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:19.836306 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:19.836619 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:20.336334 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:20.336416 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:20.336797 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:20.836189 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:20.836268 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:20.836618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:21.136217 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:21.193283 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:21.196963 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:21.196996 1297065 retry.go:31] will retry after 15.530830741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:21.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:21.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:21.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:21.336677 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:21.836352 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:21.836433 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:21.836751 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:22.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:22.336615 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:22.336948 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:22.836275 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:22.836348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:22.836696 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:23.336400 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:23.336482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:23.336828 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:23.336887 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:23.836198 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:23.836296 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:23.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:24.336254 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:24.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:24.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:24.836327 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:24.836403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:24.836743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:25.336278 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:25.336354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:25.336703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:25.836226 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:25.836309 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:25.836679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:25.836740 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:26.337200 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:26.337293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:26.337628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:26.836405 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:26.836480 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:26.836777 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:27.336562 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:27.336653 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:27.337005 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:27.836230 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:27.836307 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:27.836657 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:28.336177 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:28.336267 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:28.336587 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:28.336638 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:28.836250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:28.836332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:28.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:29.250199 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:29.308318 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:29.311716 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:29.311747 1297065 retry.go:31] will retry after 30.463725654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:29.336999 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:29.337080 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:29.337458 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:29.836155 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:29.836222 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:29.836520 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:30.336243 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:30.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:30.336620 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:30.336669 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:30.836228 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:30.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:30.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:31.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:31.336285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:31.336588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:31.836269 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:31.836340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:31.836687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:32.336490 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:32.336568 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:32.336902 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:32.336957 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:32.836192 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:32.836262 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:32.836598 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:33.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:33.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:33.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:33.836245 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:33.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:33.836664 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:34.336184 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:34.336253 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:34.336535 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:34.836284 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:34.836360 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:34.836791 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:34.836848 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:35.336516 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:35.336596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:35.336938 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:35.836527 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:35.836598 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:35.836864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:36.336942 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:36.337020 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:36.337342 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:36.728993 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:36.785078 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:36.788836 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:36.788868 1297065 retry.go:31] will retry after 31.693829046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:36.837069 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:36.837145 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:36.837461 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:36.837513 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:37.336193 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:37.336260 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:37.336549 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:37.836234 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:37.836312 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:37.836628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:38.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:38.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:38.336672 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:38.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:38.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:38.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:39.336229 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:39.336304 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:39.336650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:39.336709 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:39.836236 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:39.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:39.836618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:40.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:40.336355 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:40.336614 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:40.836272 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:40.836382 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:40.836789 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:41.336524 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:41.336601 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:41.336927 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:41.336987 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:41.836201 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:41.836278 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:41.836626 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:42.336633 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:42.336716 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:42.337072 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:42.836881 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:42.836955 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:42.837306 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:43.337071 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:43.337144 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:43.337415 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:43.337468 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:43.836983 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:43.837056 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:43.837412 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:44.336153 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:44.336229 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:44.336573 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:44.836267 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:44.836356 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:44.836695 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:45.336450 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:45.336538 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:45.336949 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:45.836752 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:45.836829 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:45.837178 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:45.837235 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:46.336981 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:46.337060 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:46.337351 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:46.836213 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:46.836319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:46.836733 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:47.336523 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:47.336680 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:47.336969 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:47.836511 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:47.836579 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:47.836844 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:48.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:48.336310 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:48.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:48.336704 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:48.836371 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:48.836487 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:48.836832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:49.336188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:49.336255 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:49.336544 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:49.836263 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:49.836365 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:49.836653 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:50.336392 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:50.336468 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:50.336808 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:50.336866 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:50.836325 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:50.836439 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:50.836713 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:51.336252 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:51.336346 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:51.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:51.836280 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:51.836353 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:51.836671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:52.336550 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:52.336630 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:52.336893 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:52.336943 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:52.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:52.836322 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:52.836667 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:53.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:53.336342 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:53.336699 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:53.836191 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:53.836264 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:53.836543 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:54.336246 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:54.336345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:54.336700 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:54.836402 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:54.836475 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:54.836811 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:54.836869 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:55.336360 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:55.336437 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:55.336727 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:55.836432 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:55.836512 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:55.836850 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:56.337034 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:56.337132 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:56.337451 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:56.836142 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:56.836214 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:56.836473 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:57.336469 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:57.336554 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:57.336893 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:57.336949 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:57.836297 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:57.836381 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:57.836714 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:58.336385 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:58.336465 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:58.336743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:58.836460 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:58.836541 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:58.836889 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:59.336265 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:59.336340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:59.336697 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:59.776318 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:59.836157 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:59.836232 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:59.836466 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:59.836509 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:59.839555 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:59.839592 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:59.839611 1297065 retry.go:31] will retry after 31.022889465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:51:00.336284 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:00.336385 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:00.337017 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:00.836870 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:00.836951 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:00.837274 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:01.337018 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:01.337093 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:01.337377 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:01.836106 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:01.836178 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:01.836532 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:01.836591 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:02.336582 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:02.336658 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:02.336989 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:02.836525 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:02.836602 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:02.836897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:03.336270 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:03.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:03.336732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:03.836448 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:03.836526 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:03.836864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:03.836920 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:04.336555 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:04.336630 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:04.336888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:04.836543 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:04.836644 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:04.836971 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:05.336771 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:05.336847 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:05.337186 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:05.836528 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:05.836603 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:05.836858 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:06.336901 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:06.336978 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:06.337275 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:06.337322 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:06.836616 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:06.836698 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:06.837028 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:07.336511 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:07.336579 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:07.336834 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:07.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:07.836317 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:07.836644 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:08.336229 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:08.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:08.336668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:08.482933 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:51:08.546772 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:08.546820 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:08.546914 1297065 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 14:51:08.836114 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:08.836184 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:08.836454 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:08.836495 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:09.336176 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:09.336252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:09.336597 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:09.836346 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:09.836424 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:09.836727 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:10.336174 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:10.336265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:10.336548 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:10.836166 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:10.836272 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:10.836571 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:10.836621 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:11.336180 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:11.336265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:11.336582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:11.836217 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:11.836300 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:11.836626 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:12.336568 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:12.336663 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:12.336970 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:12.836801 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:12.836879 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:12.837247 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:12.837301 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:13.336980 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:13.337062 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:13.337320 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:13.837125 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:13.837211 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:13.837540 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:14.336301 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:14.336390 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:14.336757 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:14.836166 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:14.836241 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:14.836499 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:15.336228 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:15.336300 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:15.336648 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:15.336706 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:15.836385 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:15.836461 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:15.836787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:16.336816 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:16.336889 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:16.337169 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:16.836948 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:16.837028 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:16.837350 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:17.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:17.336319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:17.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:17.836172 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:17.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:17.836555 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:17.836606 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:18.336236 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:18.336313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:18.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:18.836373 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:18.836452 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:18.836760 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:19.336167 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:19.336238 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:19.336538 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:19.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:19.836297 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:19.836617 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:19.836675 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:20.336339 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:20.336412 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:20.336771 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:20.836180 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:20.836251 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:20.836567 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:21.336259 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:21.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:21.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:21.836380 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:21.836462 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:21.836800 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:21.836855 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:22.336522 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:22.336591 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:22.336867 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:22.836547 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:22.836626 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:22.836957 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:23.336750 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:23.336825 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:23.337141 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:23.836507 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:23.836582 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:23.836841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:23.836883 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:24.336607 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:24.336681 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:24.337016 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:24.836840 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:24.836916 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:24.837240 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:25.336547 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:25.336619 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:25.336933 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:25.836630 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:25.836712 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:25.837049 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:25.837104 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:26.337004 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:26.337079 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:26.337406 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:26.836128 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:26.836203 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:26.836467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:27.336400 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:27.336476 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:27.336833 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:27.836240 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:27.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:27.836680 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:28.336379 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:28.336452 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:28.336710 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:28.336750 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:28.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:28.836333 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:28.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:29.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:29.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:29.336705 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:29.836347 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:29.836422 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:29.836690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:30.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:30.336351 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:30.336706 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:30.836402 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:30.836499 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:30.836836 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:30.836891 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:30.863046 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:51:30.922204 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:30.922247 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:30.922363 1297065 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 14:51:30.925463 1297065 out.go:179] * Enabled addons: 
	I1213 14:51:30.929007 1297065 addons.go:530] duration metric: took 1m54.593151344s for enable addons: enabled=[]
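[editor's note] The entries above and below show the pattern that dominates this failure: the test binary polls GET https://192.168.49.2:8441/api/v1/nodes/functional-562018 roughly every 500 ms, and every attempt ends in "connect: connection refused" because nothing is listening on the apiserver port, so the addon applies also fail. The following is a minimal, self-contained Go sketch of that kind of retry loop, offered only as an illustration under stated assumptions (plain net/http client, InsecureSkipVerify for the self-signed test CA, a fixed 10-attempt cap); it is not minikube's actual node_ready.go implementation.

	// Sketch only: poll the node URL seen in the log and retry on connection errors.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above; adjust for another cluster.
		url := "https://192.168.49.2:8441/api/v1/nodes/functional-562018"

		client := &http.Client{
			Timeout: 2 * time.Second,
			// Assumption: skip TLS verification because the test apiserver uses a self-signed CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		for attempt := 1; attempt <= 10; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				// Mirrors the repeated "connect: connection refused" entries in the log.
				fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			resp.Body.Close()
			fmt.Printf("attempt %d: HTTP %d\n", attempt, resp.StatusCode)
			return
		}
		fmt.Println("gave up: apiserver never became reachable")
	}
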
	I1213 14:51:31.336478 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:31.336574 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:31.336911 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:31.836663 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:31.836742 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:31.837400 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:32.336285 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:32.336362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:32.337832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 14:51:32.836218 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:32.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:32.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:33.336237 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:33.336311 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:33.336634 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:33.336688 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:33.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:33.836296 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:33.836630 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:34.336182 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:34.336256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:34.336569 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:34.836237 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:34.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:34.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:35.336246 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:35.336327 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:35.336677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:35.336739 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:35.836381 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:35.836450 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:35.836754 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:36.336847 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:36.336928 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:36.337255 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:36.836533 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:36.836613 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:36.836939 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:37.336507 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:37.336573 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:37.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:37.336879 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:37.836228 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:37.836301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:37.836594 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:38.336263 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:38.336341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:38.336675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:38.836200 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:38.836285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:38.836811 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:39.336276 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:39.336373 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:39.336728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:39.836259 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:39.836349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:39.836684 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:39.836742 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:40.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:40.336295 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:40.336618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:40.836435 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:40.836524 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:40.836905 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:41.336775 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:41.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:41.337175 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:41.836559 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:41.836631 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:41.836894 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:41.836936 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:42.336658 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:42.336748 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:42.337128 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:42.836909 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:42.836987 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:42.837289 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:43.337040 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:43.337127 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:43.337474 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:43.836192 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:43.836275 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:43.836624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:44.336291 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:44.336388 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:44.336784 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:44.336841 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:44.836180 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:44.836254 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:44.836551 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:45.336321 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:45.336400 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:45.336690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:45.836435 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:45.836510 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:45.836833 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:46.336779 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:46.336848 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:46.337141 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:46.337201 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:46.836518 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:46.836596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:46.836935 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:47.336893 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:47.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:47.337308 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:47.836529 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:47.836614 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:47.836876 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:48.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:48.336337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:48.336692 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:48.836415 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:48.836494 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:48.836834 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:48.836892 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:49.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:49.336621 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:49.336897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:49.836323 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:49.836400 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:49.836703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:50.336284 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:50.336361 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:50.336695 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:50.836346 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:50.836424 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:50.836742 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:51.336225 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:51.336303 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:51.336650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:51.336709 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:51.836373 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:51.836452 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:51.836792 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:52.336469 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:52.336538 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:52.336793 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:52.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:52.836345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:52.836678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:53.336269 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:53.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:53.336675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:53.336740 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:53.836126 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:53.836205 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:53.836462 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:54.336204 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:54.336277 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:54.336594 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:54.836224 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:54.836301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:54.836659 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:55.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:55.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:55.336616 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:55.836312 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:55.836389 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:55.836728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:55.836782 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed for readability: from 14:51:56 through 14:52:54 the same poll (pid 1297065) repeats every ~500ms — GET https://192.168.49.2:8441/api/v1/nodes/functional-562018 with the identical Accept/User-Agent headers shown above, each attempt returning an empty response (status="" headers="" milliseconds=0) — and node_ready.go:55 keeps logging the warning 'error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused' roughly every 2-2.5 seconds throughout this interval]
	I1213 14:52:55.336414 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:55.336481 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:55.336738 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:55.836424 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:55.836502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:55.836840 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:56.336926 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:56.337006 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:56.337393 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:56.837161 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:56.837240 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:56.837514 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:56.837556 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:57.336486 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:57.336562 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:57.336904 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:57.836247 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:57.836326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:57.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:58.336169 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:58.336261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:58.336585 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:58.836253 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:58.836338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:58.836686 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:59.336405 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:59.336488 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:59.336818 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:59.336881 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:59.836205 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:59.836279 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:59.836602 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:00.336348 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:00.336434 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:00.336755 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:00.836458 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:00.836538 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:00.836919 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:01.336481 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:01.336559 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:01.336870 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:01.336917 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:01.836269 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:01.836349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:01.836651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:02.336510 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:02.336585 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:02.336875 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:02.836559 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:02.836633 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:02.836887 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:03.336237 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:03.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:03.336652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:03.836262 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:03.836343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:03.836681 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:03.836743 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:04.336178 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:04.336263 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:04.336579 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:04.836312 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:04.836395 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:04.836768 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:05.336328 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:05.336405 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:05.336722 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:05.836169 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:05.836249 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:05.836532 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:06.337061 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:06.337133 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:06.337448 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:06.337510 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:06.836170 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:06.836312 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:06.836648 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:07.336505 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:07.336579 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:07.336834 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:07.836162 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:07.836243 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:07.836604 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:08.336414 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:08.336492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:08.336833 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:08.836389 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:08.836459 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:08.836768 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:08.836825 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:09.336249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:09.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:09.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:09.836385 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:09.836463 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:09.836810 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:10.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:10.336589 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:10.336857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:10.836237 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:10.836310 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:10.836668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:11.336409 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:11.336502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:11.336898 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:11.336954 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:11.836193 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:11.836273 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:11.836553 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:12.336497 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:12.336582 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:12.336916 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:12.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:12.836346 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:12.836677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:13.336363 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:13.336435 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:13.336727 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:13.836260 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:13.836336 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:13.836645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:13.836693 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:14.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:14.336320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:14.336647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:14.836180 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:14.836273 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:14.836579 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:15.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:15.336342 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:15.336712 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:15.836446 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:15.836528 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:15.836857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:15.836911 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:16.336886 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:16.336953 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:16.337211 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:16.836970 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:16.837043 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:16.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:17.336898 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:17.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:17.337298 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:17.837031 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:17.837110 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:17.837392 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:17.837435 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:18.336966 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:18.337049 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:18.337376 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:18.837166 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:18.837253 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:18.837689 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:19.336193 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:19.336283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:19.336617 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:19.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:19.836332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:19.836666 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:20.336399 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:20.336476 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:20.336824 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:20.336877 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:20.836528 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:20.836607 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:20.836879 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:21.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:21.336319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:21.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:21.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:21.836331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:21.836682 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:22.336425 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:22.336492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:22.336751 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:22.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:22.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:22.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:22.836701 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:23.336413 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:23.336491 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:23.336832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:23.836195 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:23.836282 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:23.836590 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:24.336251 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:24.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:24.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:24.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:24.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:24.836645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:25.336331 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:25.336403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:25.336743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:25.336792 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:25.836245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:25.836324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:25.836644 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:26.336605 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:26.336680 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:26.337038 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:26.836509 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:26.836578 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:26.836824 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:27.336452 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:27.336525 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:27.336887 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:27.336942 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:27.836486 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:27.836568 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:27.836917 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:28.336112 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:28.336186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:28.336563 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:28.836282 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:28.836357 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:28.836713 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:29.336309 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:29.336384 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:29.336723 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:29.836403 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:29.836478 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:29.836733 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:29.836776 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:30.336220 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:30.336298 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:30.336637 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:30.836357 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:30.836431 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:30.836763 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:31.336439 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:31.336532 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:31.336820 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:31.836503 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:31.836582 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:31.836898 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:31.836954 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:32.336893 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:32.336969 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:32.337280 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:32.837017 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:32.837102 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:32.837392 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:33.336206 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:33.336289 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:33.336624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:33.836226 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:33.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:33.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:34.336143 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:34.336223 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:34.336515 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:34.336566 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:34.836223 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:34.836309 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:34.836660 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:35.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:35.336337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:35.336768 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:35.836351 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:35.836427 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:35.836683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:36.336777 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:36.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:36.337168 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:36.337222 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:36.837003 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:36.837084 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:36.837449 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:37.336375 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:37.336445 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:37.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:37.836402 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:37.836479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:37.836826 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:38.336440 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:38.336525 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:38.336860 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:38.836259 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:38.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:38.836606 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:38.836659 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:39.336506 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:39.336596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:39.337235 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:39.836335 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:39.836421 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:39.836794 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:40.336522 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:40.336587 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:40.336888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:40.836592 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:40.836674 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:40.837021 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:40.837076 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:41.336578 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:41.336655 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:41.336975 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:41.836525 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:41.836604 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:41.836959 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:42.336767 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:42.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:42.337172 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:42.836977 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:42.837055 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:42.837406 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:42.837463 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:43.336096 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:43.336165 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:43.336522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:43.836216 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:43.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:43.836677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:44.336275 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:44.336366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:44.336718 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:44.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:44.836246 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:44.836531 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:45.336294 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:45.336384 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:45.336759 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:45.336815 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:45.836495 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:45.836571 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:45.836902 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:46.336923 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:46.336991 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:46.337248 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:46.836581 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:46.836658 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:46.836955 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:47.336876 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:47.336959 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:47.337291 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:47.337349 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:47.837127 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:47.837195 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:47.837512 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:48.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:48.336347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:48.336704 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:48.836213 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:48.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:48.836647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:49.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:49.336258 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:49.336584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:49.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:49.836330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:49.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:49.836707 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:50.336396 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:50.336475 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:50.336802 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:50.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:50.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:50.836524 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:51.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:51.336340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:51.336661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:51.836254 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:51.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:51.836673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:51.836731 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:52.336439 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:52.336508 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:52.336813 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:52.836552 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:52.836646 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:52.837037 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:53.336867 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:53.336943 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:53.337248 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:53.836529 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:53.836600 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:53.836882 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:53.836925 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:54.336730 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:54.336804 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:54.337142 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:54.836954 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:54.837030 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:54.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:55.337104 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:55.337186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:55.337475 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:55.836190 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:55.836268 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:55.836616 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:56.336432 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:56.336515 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:56.336847 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:56.336900 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:56.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:56.836260 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:56.836553 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:57.336500 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:57.336575 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:57.336934 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:57.836737 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:57.836827 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:57.837184 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:58.336550 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:58.336646 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:58.336966 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:58.337018 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:58.836741 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:58.836828 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:58.837162 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:59.336945 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:59.337026 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:59.337378 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:59.836973 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:59.837043 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:59.837302 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:00.337185 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:00.337285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:00.337926 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:00.338025 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:00.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:00.836326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:00.836691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:01.336224 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:01.336316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:01.336589 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:01.836224 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:01.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:01.836636 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:02.336530 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:02.336607 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:02.336904 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:02.836600 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:02.836677 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:02.837015 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:02.837082 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:03.336835 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:03.336910 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:03.337276 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:03.837094 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:03.837170 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:03.837522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:04.336212 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:04.336283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:04.336559 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:04.836246 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:04.836354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:04.836699 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:05.336254 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:05.336341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:05.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:05.336745 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:05.836209 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:05.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:05.836622 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:06.336695 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:06.336783 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:06.337108 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:06.836892 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:06.836966 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:06.837308 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:07.336123 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:07.336192 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:07.336465 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:07.836757 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:07.836832 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:07.837160 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:07.837217 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:08.336959 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:08.337035 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:08.337354 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:08.837047 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:08.837117 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:08.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:09.336797 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:09.336876 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:09.337176 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:09.836976 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:09.837060 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:09.837357 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:09.837405 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:10.337145 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:10.337219 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:10.337522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:10.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:10.836324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:10.836624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:11.336249 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:11.336335 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:11.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:11.836251 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:11.836329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:11.836584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:12.336557 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:12.336629 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:12.336964 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:12.337021 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:12.836792 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:12.836867 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:12.837180 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:13.336499 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:13.336580 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:13.336912 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:13.836538 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:13.836617 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:13.836932 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:14.336207 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:14.336299 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:14.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:14.836329 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:14.836410 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:14.836729 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:14.836786 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:15.336238 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:15.336371 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:15.336658 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:15.836347 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:15.836425 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:15.836765 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:16.336570 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:16.336641 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:16.336896 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:16.836234 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:16.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:16.836636 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:17.336499 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:17.336578 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:17.336890 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:17.336950 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:17.836161 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:17.836245 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:17.836561 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:18.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:18.336345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:18.336673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:18.836422 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:18.836499 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:18.836856 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:19.336539 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:19.336609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:19.336871 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:19.836236 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:19.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:19.836657 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:19.836712 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:20.336398 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:20.336479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:20.336829 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:20.836244 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:20.836336 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:20.836642 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:21.336253 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:21.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:21.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:21.836309 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:21.836398 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:21.836758 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:21.836814 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:22.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:22.336624 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:22.336925 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:22.836625 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:22.836707 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:22.837057 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:23.336641 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:23.336724 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:23.337073 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:23.836556 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:23.836634 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:23.836903 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:23.836945 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:24.336275 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:24.336357 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:24.336645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:24.836238 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:24.836349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:24.836732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:25.336455 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:25.336529 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:25.336850 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:25.836272 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:25.836354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:25.836679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:26.336762 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:26.336843 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:26.337194 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:26.337248 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:26.836530 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:26.836634 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:26.836949 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:27.337082 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:27.337168 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:27.337523 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:27.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:27.836347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:27.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:28.336192 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:28.336265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:28.336562 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:28.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:28.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:28.836676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:28.836744 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:29.336414 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:29.336563 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:29.336947 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:29.836198 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:29.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:29.836614 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:30.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:30.336318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:30.336656 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:30.836210 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:30.836286 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:30.836612 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:31.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:31.336362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:31.336639 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:31.336684 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:31.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:31.836343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:31.836692 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:32.336488 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:32.336567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:32.336863 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:32.836173 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:32.836265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:32.836578 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:33.336248 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:33.336322 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:33.336687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:33.336748 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:33.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:33.836362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:33.836704 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:34.336406 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:34.336478 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:34.336748 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:34.836467 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:34.836551 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:34.836857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:35.336588 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:35.336668 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:35.337027 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:35.337086 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:35.836514 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:35.836585 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:35.836913 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:36.336967 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:36.337041 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:36.337376 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:36.837202 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:36.837285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:36.837584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:37.336502 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:37.336591 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:37.336896 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:37.836591 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:37.836694 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:37.837046 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:37.837115 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:38.336899 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:38.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:38.337328 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:38.837050 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:38.837126 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:38.837404 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:39.336160 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:39.336232 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:39.336580 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:39.836289 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:39.836382 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:39.836703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:40.336371 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:40.336443 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:40.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:40.336759 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:40.836239 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:40.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:40.836655 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:41.336240 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:41.336320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:41.336686 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:41.836231 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:41.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:41.836611 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:42.336623 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:42.336717 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:42.337080 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:42.337132 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:42.836776 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:42.836862 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:42.837178 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:43.336512 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:43.336586 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:43.336846 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:43.836233 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:43.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:43.836685 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:44.336266 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:44.336339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:44.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:44.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:44.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:44.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:44.836676 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:45.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:45.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:45.336676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:45.836517 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:45.836597 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:45.836920 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:46.336894 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:46.336967 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:46.337224 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:46.837014 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:46.837094 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:46.837437 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:46.837490 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:47.336253 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:47.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:47.336670 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:47.836235 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:47.836302 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:47.836610 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:48.336235 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:48.336318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:48.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:48.836257 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:48.836337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:48.836678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:49.336349 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:49.336431 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:49.336732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:49.336821 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:49.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:49.836293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:49.836634 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:50.336226 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:50.336307 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:50.336635 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:50.836333 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:50.836410 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:50.836688 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:51.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:51.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:51.336678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:51.836396 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:51.836479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:51.836771 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:51.836817 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:52.336516 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:52.336593 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:52.336852 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:52.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:52.836340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:52.836773 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:53.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:53.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:53.336935 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:53.836510 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:53.836587 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:53.836851 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:53.836896 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:54.336367 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:54.336467 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:54.336808 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:54.836267 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:54.836343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:54.836686 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:55.336171 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:55.336242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:55.336562 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:55.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:55.836320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:55.836689 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:56.336624 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:56.336725 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:56.337092 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:56.337153 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:56.836464 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:56.836539 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:56.836794 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:57.336513 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:57.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:57.336897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:57.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:57.836300 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:57.836664 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:58.336100 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:58.336175 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:58.336496 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:58.836220 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:58.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:58.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:58.836706 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:59.336458 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:59.336535 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:59.336905 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:59.836288 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:59.836366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:59.836722 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:00.336435 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:00.336516 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:00.336842 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:00.836803 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:00.836881 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:00.837232 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:00.837290 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:01.336546 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:01.336620 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:01.336919 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:01.836631 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:01.836717 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:01.837061 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:02.336921 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:02.337000 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:02.337379 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:02.837188 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:02.837257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:02.837522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:02.837565 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:03.336219 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:03.336301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:03.336654 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:03.836248 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:03.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:03.836635 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:04.336178 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:04.336251 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:04.336567 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:04.836232 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:04.836318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:04.836669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:05.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:05.336317 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:05.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:05.336713 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:05.836366 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:05.836448 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:05.836735 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:06.336637 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:06.336720 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:06.337074 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:06.836743 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:06.836817 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:06.837172 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:07.336998 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:07.337074 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:07.337343 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:07.337395 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:07.837167 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:07.837242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:07.837584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:08.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:08.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:08.336647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:08.836178 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:08.836256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:08.836598 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:09.336224 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:09.336297 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:09.336632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:09.836244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:09.836321 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:09.836675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:09.836731 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:10.336173 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:10.336248 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:10.336521 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:10.836203 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:10.836283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:10.836642 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:11.336345 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:11.336437 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:11.336802 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:11.836493 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:11.836567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:11.836846 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:11.836897 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:12.336745 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:12.336822 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:12.337164 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:12.836822 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:12.836903 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:12.837329 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:13.337068 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:13.337137 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:13.337477 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:13.836207 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:13.836286 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:13.836668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:14.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:14.336327 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:14.336632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:14.336679 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:14.836300 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:14.836375 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:14.836649 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:15.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:15.336332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:15.336669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:15.836251 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:15.836333 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:15.836683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:16.336651 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:16.336729 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:16.337093 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:16.337145 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:16.836909 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:16.836992 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:16.837356 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:17.336137 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:17.336212 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:17.336571 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:17.836247 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:17.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:17.836610 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:18.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:18.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:18.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:18.836230 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:18.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:18.836647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:18.836705 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:19.336364 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:19.336454 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:19.336797 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:19.836215 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:19.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:19.836625 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:20.336325 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:20.336403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:20.336754 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:20.836274 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:20.836352 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:20.836687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:20.836744 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:21.336273 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:21.336350 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:21.336700 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:21.836398 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:21.836482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:21.836816 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:22.336510 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:22.336583 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:22.336841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:22.836211 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:22.836292 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:22.836650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:23.336238 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:23.336314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:23.336696 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:23.336754 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:23.836429 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:23.836502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:23.836789 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:24.336496 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:24.336580 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:24.336961 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:24.836574 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:24.836650 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:24.836988 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:25.336500 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:25.336566 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:25.336817 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:25.336861 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:25.836628 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:25.836709 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:25.837047 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:26.337039 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:26.337121 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:26.337470 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:26.836162 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:26.836244 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:26.836581 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:27.336591 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:27.336672 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:27.337011 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:27.337065 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:27.836601 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:27.836681 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:27.837000 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:28.336523 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:28.336604 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:28.336874 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:28.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:28.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:28.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:29.336385 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:29.336497 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:29.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:29.836183 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:29.836257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:29.836558 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:29.836608 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:30.336289 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:30.336367 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:30.336681 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:30.836223 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:30.836310 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:30.836632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:31.336179 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:31.336247 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:31.336520 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:31.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:31.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:31.836631 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:31.836685 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:32.336406 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:32.336490 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:32.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:32.836185 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:32.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:32.836552 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:33.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:33.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:33.336778 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:33.836367 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:33.836492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:33.836841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:33.836899 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:34.336602 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:34.336672 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:34.336962 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:34.836466 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:34.836542 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:34.836843 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:35.336251 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:35.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:35.336690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:35.836215 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:35.836283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:35.836600 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:36.336641 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:36.336716 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:36.337095 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:36.337155 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:36.836776 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:36.836857 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:36.837203 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:37.337030 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:37.337151 1297065 node_ready.go:38] duration metric: took 6m0.001157945s for node "functional-562018" to be "Ready" ...
	I1213 14:55:37.340291 1297065 out.go:203] 
	W1213 14:55:37.343143 1297065 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 14:55:37.343162 1297065 out.go:285] * 
	W1213 14:55:37.345311 1297065 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 14:55:37.348302 1297065 out.go:203] 
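	(Editorial note, not part of the captured log: the six minutes of output above is minikube's node_ready wait loop. Roughly every 500ms it GETs /api/v1/nodes/functional-562018 and checks the node's Ready condition; every request fails with "connection refused" because the apiserver on 192.168.49.2:8441 never comes back, so the loop exhausts its 6m0s deadline and the start fails with GUEST_START. Purely as an illustration of that check, and not minikube's actual implementation, a minimal client-go sketch of the same polling could look like the following; the kubeconfig path, node name, 500ms interval, and 6-minute deadline are assumptions read off the log, not taken from minikube's source.)

	// poll_node_ready.go - hypothetical sketch, not minikube code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); path is an assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Same shape as the log: retry every 500ms until a 6-minute deadline.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-562018", metav1.GetOptions{})
			if err != nil {
				// While the apiserver is down this is the "connection refused" seen above.
				fmt.Println("error getting node (will retry):", err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for node to be Ready")
	}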
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839061081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839082069Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839142982Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839165489Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839181579Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839196856Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839210362Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839227009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839247973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839286208Z" level=info msg="Connect containerd service"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839634951Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.840751317Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.850265604Z" level=info msg="Start subscribing containerd event"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.850350033Z" level=info msg="Start recovering state"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.850594999Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.850703108Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886139866Z" level=info msg="Start event monitor"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886335201Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886398699Z" level=info msg="Start streaming server"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886467719Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886526179Z" level=info msg="runtime interface starting up..."
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886580873Z" level=info msg="starting plugins..."
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886640704Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 14:49:34 functional-562018 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.893206436Z" level=info msg="containerd successfully booted in 0.076868s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:55:39.109145    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:39.109888    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:39.111478    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:39.111883    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:39.113311    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 14:55:39 up  6:38,  0 user,  load average: 0.13, 0.26, 0.75
	Linux functional-562018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 14:55:35 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:36 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 13 14:55:36 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:36 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:36 functional-562018 kubelet[8346]: E1213 14:55:36.626660    8346 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:36 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:36 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:37 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 13 14:55:37 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:37 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:37 functional-562018 kubelet[8352]: E1213 14:55:37.429006    8352 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:37 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:37 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:38 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 13 14:55:38 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:38 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:38 functional-562018 kubelet[8359]: E1213 14:55:38.137307    8359 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:38 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:38 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:38 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 13 14:55:38 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:38 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:38 functional-562018 kubelet[8398]: E1213 14:55:38.897106    8398 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:38 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:38 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
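The kubelet section above is the likely root cause for this run: every kubelet restart (counters 808 through 811) exits with "kubelet is configured to not run on a host using cgroup v1", so the API server on 8441 never comes up and the node never reaches Ready within the 6m0s wait. A minimal way to confirm the cgroup mode on the host and inside the kic container is sketched below; the container name functional-562018 is taken from the logs above, and the commands assume a stock Docker install.

	# prints cgroup2fs on a cgroup v2 host, tmpfs/cgroupfs on cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# Docker's own view of the cgroup version it runs containers under
	docker info --format '{{.CgroupVersion}}'
	# the same filesystem check from inside the minikube node container
	docker exec functional-562018 stat -fc %T /sys/fs/cgroup/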
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 2 (364.627316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (367.91s)
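The SoftStart failure above is a timeout rather than a harness crash: node_ready.go waited the full StartHostTimeout of 6m0s while the kubelet crash-looped, then GUEST_START gave up. When reproducing, it can help to separate a merely slow start from the hard kubelet failure; a sketch, assuming the --wait-timeout flag and the binary path used elsewhere in this report, is to retry with a longer wait and read the kubelet unit directly:

	out/minikube-linux-arm64 start -p functional-562018 --wait=all --wait-timeout=10m --alsologtostderr -v=8
	out/minikube-linux-arm64 -p functional-562018 ssh -- sudo journalctl -u kubelet --no-pager | tail -n 20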

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-562018 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-562018 get po -A: exit status 1 (65.575865ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-562018 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-562018 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-562018 get po -A"
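This failure is downstream of the same condition: kubectl targets 192.168.49.2:8441, the node container address shown in the docker inspect output below, but nothing is listening there because the control plane never started. Two quick checks that distinguish "API server down" from "wrong kubeconfig" (a sketch using the profile name from this report):

	# confirm which server the kubectl context points at
	kubectl --context functional-562018 config view --minify -o jsonpath='{.clusters[0].cluster.server}'
	# check whether anything is bound on the apiserver port inside the node
	out/minikube-linux-arm64 -p functional-562018 ssh -- sudo ss -ltnp | grep 8441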
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-562018
helpers_test.go:244: (dbg) docker inspect functional-562018:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	        "Created": "2025-12-13T14:41:15.451086653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1291703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T14:41:15.527927053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hosts",
	        "LogPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648-json.log",
	        "Name": "/functional-562018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-562018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-562018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	                "LowerDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-562018",
	                "Source": "/var/lib/docker/volumes/functional-562018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-562018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-562018",
	                "name.minikube.sigs.k8s.io": "functional-562018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b22297a29553cdd0dbc4eaa766abcdb1e67465ee18f1e2f5f9917dc8cf6d08",
	            "SandboxKey": "/var/run/docker/netns/f4b22297a295",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-562018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f3:95:ff:30:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd2e7aed753870546b4bccdeed8073ad5795c14334e906d06b964f72dc448c38",
	                    "EndpointID": "a0e947c1e40f773105c811b67b7d1d63f19d3a20060380bbde944bf9bfe39be5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-562018",
	                        "2cd1277ca783"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
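Most of what the post-mortem needs from the inspect dump above is in the network section: the container is Running, its address on the functional-562018 network is 192.168.49.2, and 8441/tcp is additionally published to 127.0.0.1:33921. The same fields can be read directly with Go templates; a sketch, where the second template mirrors the one minikube itself uses for the SSH port later in this log:

	docker inspect -f '{{ (index .NetworkSettings.Networks "functional-562018").IPAddress }}' functional-562018
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-562018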
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018: exit status 2 (301.049716ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
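Note the split result: the host/container probe above reports Running while the APIServer probe in the previous test reported Stopped, which matches a node whose kubelet never starts the control plane. When triaging by hand it is often simpler to ask for every component in one call; a sketch, assuming the status flags of the minikube build used here:

	out/minikube-linux-arm64 status -p functional-562018 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'
	out/minikube-linux-arm64 status -p functional-562018 --output json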
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-831661 ssh sudo cat /etc/ssl/certs/12529342.pem                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
	│ ssh            │ functional-831661 ssh sudo cat /usr/share/ca-certificates/12529342.pem                                                                                          │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
	│ ssh            │ functional-831661 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image load --daemon kicbase/echo-server:functional-831661 --alsologtostderr                                                                   │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image save kicbase/echo-server:functional-831661 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image rm kicbase/echo-server:functional-831661 --alsologtostderr                                                                              │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ update-context │ functional-831661 update-context --alsologtostderr -v=2                                                                                                         │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ update-context │ functional-831661 update-context --alsologtostderr -v=2                                                                                                         │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ update-context │ functional-831661 update-context --alsologtostderr -v=2                                                                                                         │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image save --daemon kicbase/echo-server:functional-831661 --alsologtostderr                                                                   │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls --format json --alsologtostderr                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls --format short --alsologtostderr                                                                                                     │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls --format table --alsologtostderr                                                                                                     │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ ssh            │ functional-831661 ssh pgrep buildkitd                                                                                                                           │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	│ image          │ functional-831661 image ls --format yaml --alsologtostderr                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image build -t localhost/my-image:functional-831661 testdata/build --alsologtostderr                                                          │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image          │ functional-831661 image ls                                                                                                                                      │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ delete         │ -p functional-831661                                                                                                                                            │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ start          │ -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	│ start          │ -p functional-562018 --alsologtostderr -v=8                                                                                                                     │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:49 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:49:32
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:49:32.175934 1297065 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:49:32.176062 1297065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:49:32.176074 1297065 out.go:374] Setting ErrFile to fd 2...
	I1213 14:49:32.176081 1297065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:49:32.176329 1297065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:49:32.176775 1297065 out.go:368] Setting JSON to false
	I1213 14:49:32.177662 1297065 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23521,"bootTime":1765613851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:49:32.177756 1297065 start.go:143] virtualization:  
	I1213 14:49:32.181250 1297065 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:49:32.184279 1297065 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:49:32.184349 1297065 notify.go:221] Checking for updates...
	I1213 14:49:32.190681 1297065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:49:32.193733 1297065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:32.196589 1297065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:49:32.199444 1297065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 14:49:32.202364 1297065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:49:32.205680 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:32.205788 1297065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:49:32.233101 1297065 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:49:32.233224 1297065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:49:32.299716 1297065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 14:49:32.290425951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:49:32.299832 1297065 docker.go:319] overlay module found
	I1213 14:49:32.305094 1297065 out.go:179] * Using the docker driver based on existing profile
	I1213 14:49:32.307726 1297065 start.go:309] selected driver: docker
	I1213 14:49:32.307744 1297065 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:32.307856 1297065 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:49:32.307958 1297065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:49:32.364202 1297065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 14:49:32.354888078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:49:32.364608 1297065 cni.go:84] Creating CNI manager for ""
	I1213 14:49:32.364673 1297065 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:49:32.364721 1297065 start.go:353] cluster config:
	{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:32.367887 1297065 out.go:179] * Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	I1213 14:49:32.370579 1297065 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 14:49:32.373599 1297065 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 14:49:32.376553 1297065 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:49:32.376606 1297065 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 14:49:32.376621 1297065 cache.go:65] Caching tarball of preloaded images
	I1213 14:49:32.376630 1297065 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 14:49:32.376703 1297065 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 14:49:32.376713 1297065 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 14:49:32.376820 1297065 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
	I1213 14:49:32.396105 1297065 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 14:49:32.396128 1297065 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 14:49:32.396160 1297065 cache.go:243] Successfully downloaded all kic artifacts
	I1213 14:49:32.396191 1297065 start.go:360] acquireMachinesLock for functional-562018: {Name:mk6a7956e4fce5d8e0f4d6fe039ab67ad6cd688b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:49:32.396254 1297065 start.go:364] duration metric: took 40.319µs to acquireMachinesLock for "functional-562018"
	I1213 14:49:32.396277 1297065 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:49:32.396287 1297065 fix.go:54] fixHost starting: 
	I1213 14:49:32.396543 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:32.413077 1297065 fix.go:112] recreateIfNeeded on functional-562018: state=Running err=<nil>
	W1213 14:49:32.413105 1297065 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:49:32.416298 1297065 out.go:252] * Updating the running docker "functional-562018" container ...
	I1213 14:49:32.416337 1297065 machine.go:94] provisionDockerMachine start ...
	I1213 14:49:32.416434 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.434428 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.434755 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.434764 1297065 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:49:32.588560 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:49:32.588587 1297065 ubuntu.go:182] provisioning hostname "functional-562018"
	I1213 14:49:32.588651 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.607983 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.608286 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.608297 1297065 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-562018 && echo "functional-562018" | sudo tee /etc/hostname
	I1213 14:49:32.769183 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:49:32.769274 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.789428 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.789750 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.789773 1297065 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-562018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-562018/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-562018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:49:32.943886 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:49:32.943914 1297065 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 14:49:32.943934 1297065 ubuntu.go:190] setting up certificates
	I1213 14:49:32.943953 1297065 provision.go:84] configureAuth start
	I1213 14:49:32.944016 1297065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:49:32.962011 1297065 provision.go:143] copyHostCerts
	I1213 14:49:32.962065 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:49:32.962109 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 14:49:32.962123 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:49:32.962204 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 14:49:32.962309 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:49:32.962331 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 14:49:32.962339 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:49:32.962367 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 14:49:32.962422 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:49:32.962443 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 14:49:32.962451 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:49:32.962476 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 14:49:32.962539 1297065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.functional-562018 san=[127.0.0.1 192.168.49.2 functional-562018 localhost minikube]
	I1213 14:49:33.179564 1297065 provision.go:177] copyRemoteCerts
	I1213 14:49:33.179638 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:49:33.179690 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.200012 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.307268 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 14:49:33.307352 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 14:49:33.325080 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 14:49:33.325187 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 14:49:33.348055 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 14:49:33.348124 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 14:49:33.368733 1297065 provision.go:87] duration metric: took 424.756928ms to configureAuth
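The server certificate generated above (provision.go, org=jenkins.functional-562018) covers the SANs 127.0.0.1, 192.168.49.2, functional-562018, localhost and minikube, and is copied to /etc/docker/server.pem on the node. As an illustrative check (not part of the captured run), the SANs can be confirmed with openssl against the local copy:

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'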
	I1213 14:49:33.368776 1297065 ubuntu.go:206] setting minikube options for container-runtime
	I1213 14:49:33.368958 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:33.368972 1297065 machine.go:97] duration metric: took 952.628419ms to provisionDockerMachine
	I1213 14:49:33.368979 1297065 start.go:293] postStartSetup for "functional-562018" (driver="docker")
	I1213 14:49:33.368990 1297065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:49:33.369043 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:49:33.369100 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.388800 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.495227 1297065 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:49:33.498339 1297065 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 14:49:33.498360 1297065 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 14:49:33.498365 1297065 command_runner.go:130] > VERSION_ID="12"
	I1213 14:49:33.498369 1297065 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 14:49:33.498374 1297065 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 14:49:33.498378 1297065 command_runner.go:130] > ID=debian
	I1213 14:49:33.498382 1297065 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 14:49:33.498387 1297065 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 14:49:33.498400 1297065 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 14:49:33.498729 1297065 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 14:49:33.498752 1297065 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 14:49:33.498764 1297065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 14:49:33.498818 1297065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 14:49:33.498907 1297065 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 14:49:33.498914 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> /etc/ssl/certs/12529342.pem
	I1213 14:49:33.498991 1297065 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> hosts in /etc/test/nested/copy/1252934
	I1213 14:49:33.498996 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> /etc/test/nested/copy/1252934/hosts
	I1213 14:49:33.499038 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1252934
	I1213 14:49:33.506503 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:49:33.524063 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts --> /etc/test/nested/copy/1252934/hosts (40 bytes)
	I1213 14:49:33.542234 1297065 start.go:296] duration metric: took 173.238726ms for postStartSetup
	I1213 14:49:33.542347 1297065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:49:33.542395 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.560689 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.668283 1297065 command_runner.go:130] > 18%
	I1213 14:49:33.668429 1297065 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 14:49:33.673015 1297065 command_runner.go:130] > 160G
	I1213 14:49:33.673516 1297065 fix.go:56] duration metric: took 1.277224674s for fixHost
	I1213 14:49:33.673545 1297065 start.go:83] releasing machines lock for "functional-562018", held for 1.277279647s
	I1213 14:49:33.673651 1297065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:49:33.691077 1297065 ssh_runner.go:195] Run: cat /version.json
	I1213 14:49:33.691140 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.691468 1297065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:49:33.691538 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.709148 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.719417 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.814811 1297065 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 14:49:33.814943 1297065 ssh_runner.go:195] Run: systemctl --version
	I1213 14:49:33.903672 1297065 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 14:49:33.906947 1297065 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 14:49:33.906982 1297065 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 14:49:33.907055 1297065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 14:49:33.911546 1297065 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 14:49:33.911590 1297065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:49:33.911661 1297065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:49:33.919539 1297065 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 14:49:33.919560 1297065 start.go:496] detecting cgroup driver to use...
	I1213 14:49:33.919591 1297065 detect.go:187] detected "cgroupfs" cgroup driver on host os
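minikube reports "cgroupfs" here because the node is still running cgroup v1 (the containerd deprecation warning in the crictl info output further down confirms this). A quick, illustrative way to tell the cgroup mode apart on a host:

    stat -fc %T /sys/fs/cgroup    # "cgroup2fs" on cgroup v2, "tmpfs" on cgroup v1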
	I1213 14:49:33.919652 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 14:49:33.935466 1297065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 14:49:33.948503 1297065 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:49:33.948565 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:49:33.964251 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:49:33.977532 1297065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:49:34.098935 1297065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:49:34.240532 1297065 docker.go:234] disabling docker service ...
	I1213 14:49:34.240643 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:49:34.257037 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:49:34.270650 1297065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:49:34.390022 1297065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:49:34.521564 1297065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:49:34.535848 1297065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:49:34.549721 1297065 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 14:49:34.551043 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 14:49:34.560293 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 14:49:34.569539 1297065 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 14:49:34.569607 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 14:49:34.578725 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:49:34.587464 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 14:49:34.595867 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:49:34.604914 1297065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:49:34.612837 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 14:49:34.621746 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 14:49:34.631405 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
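The sed commands above edit /etc/containerd/config.toml in place: the sandbox image is pinned to registry.k8s.io/pause:3.10.1, restrict_oom_score_adj and SystemdCgroup are forced to false (consistent with the cgroupfs driver detected earlier), the CNI conf_dir is set to /etc/cni/net.d, and enable_unprivileged_ports is re-inserted as true under [plugins."io.containerd.grpc.v1.cri"]. An illustrative spot-check of the resulting file on the node:

    sudo grep -nE 'sandbox_image|SystemdCgroup|restrict_oom_score_adj|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml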
	I1213 14:49:34.640934 1297065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:49:34.647949 1297065 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 14:49:34.649110 1297065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:49:34.656959 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:34.763520 1297065 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 14:49:34.891785 1297065 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 14:49:34.891886 1297065 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 14:49:34.896000 1297065 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 14:49:34.896045 1297065 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 14:49:34.896074 1297065 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1213 14:49:34.896088 1297065 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 14:49:34.896099 1297065 command_runner.go:130] > Access: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896109 1297065 command_runner.go:130] > Modify: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896114 1297065 command_runner.go:130] > Change: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896117 1297065 command_runner.go:130] >  Birth: -
	I1213 14:49:34.896860 1297065 start.go:564] Will wait 60s for crictl version
	I1213 14:49:34.896947 1297065 ssh_runner.go:195] Run: which crictl
	I1213 14:49:34.901248 1297065 command_runner.go:130] > /usr/local/bin/crictl
	I1213 14:49:34.901933 1297065 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 14:49:34.925912 1297065 command_runner.go:130] > Version:  0.1.0
	I1213 14:49:34.925937 1297065 command_runner.go:130] > RuntimeName:  containerd
	I1213 14:49:34.925943 1297065 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 14:49:34.925948 1297065 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 14:49:34.928438 1297065 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 14:49:34.928554 1297065 ssh_runner.go:195] Run: containerd --version
	I1213 14:49:34.949487 1297065 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 14:49:34.951799 1297065 ssh_runner.go:195] Run: containerd --version
	I1213 14:49:34.970090 1297065 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 14:49:34.977895 1297065 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 14:49:34.980777 1297065 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 14:49:34.997091 1297065 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 14:49:35.003196 1297065 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 14:49:35.003415 1297065 kubeadm.go:884] updating cluster {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:49:35.003575 1297065 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:49:35.003657 1297065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:49:35.028469 1297065 command_runner.go:130] > {
	I1213 14:49:35.028488 1297065 command_runner.go:130] >   "images":  [
	I1213 14:49:35.028493 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028502 1297065 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 14:49:35.028509 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028514 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 14:49:35.028518 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028522 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028533 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 14:49:35.028536 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028541 1297065 command_runner.go:130] >       "size":  "40636774",
	I1213 14:49:35.028545 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028549 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028552 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028555 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028563 1297065 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 14:49:35.028567 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028572 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 14:49:35.028574 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028583 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028592 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 14:49:35.028595 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028599 1297065 command_runner.go:130] >       "size":  "8034419",
	I1213 14:49:35.028603 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028607 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028610 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028613 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028620 1297065 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 14:49:35.028624 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028630 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 14:49:35.028633 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028641 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028649 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 14:49:35.028652 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028656 1297065 command_runner.go:130] >       "size":  "21168808",
	I1213 14:49:35.028660 1297065 command_runner.go:130] >       "username":  "nonroot",
	I1213 14:49:35.028664 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028667 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028670 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028677 1297065 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 14:49:35.028680 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028685 1297065 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 14:49:35.028688 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028691 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028698 1297065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 14:49:35.028701 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028706 1297065 command_runner.go:130] >       "size":  "21136588",
	I1213 14:49:35.028710 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028714 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028717 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028721 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028725 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028731 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028734 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028741 1297065 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 14:49:35.028745 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028750 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 14:49:35.028753 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028757 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028764 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 14:49:35.028768 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028772 1297065 command_runner.go:130] >       "size":  "24678359",
	I1213 14:49:35.028775 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028783 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028786 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028790 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028794 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028797 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028799 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028806 1297065 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 14:49:35.028809 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028815 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 14:49:35.028818 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028822 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028829 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 14:49:35.028833 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028837 1297065 command_runner.go:130] >       "size":  "20661043",
	I1213 14:49:35.028841 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028844 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028847 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028852 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028855 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028858 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028861 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028867 1297065 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 14:49:35.028877 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028883 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 14:49:35.028886 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028890 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028897 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 14:49:35.028900 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028905 1297065 command_runner.go:130] >       "size":  "22429671",
	I1213 14:49:35.028908 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028912 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028915 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028919 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028926 1297065 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 14:49:35.028929 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028934 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 14:49:35.028937 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028941 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028948 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 14:49:35.028951 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028955 1297065 command_runner.go:130] >       "size":  "15391364",
	I1213 14:49:35.028959 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028962 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028965 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028969 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028972 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028975 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028978 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028984 1297065 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 14:49:35.028987 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028992 1297065 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 14:49:35.028995 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028998 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.029005 1297065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 14:49:35.029009 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.029016 1297065 command_runner.go:130] >       "size":  "267939",
	I1213 14:49:35.029019 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.029023 1297065 command_runner.go:130] >         "value":  "65535"
	I1213 14:49:35.029030 1297065 command_runner.go:130] >       },
	I1213 14:49:35.029034 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.029037 1297065 command_runner.go:130] >       "pinned":  true
	I1213 14:49:35.029040 1297065 command_runner.go:130] >     }
	I1213 14:49:35.029043 1297065 command_runner.go:130] >   ]
	I1213 14:49:35.029046 1297065 command_runner.go:130] > }
	I1213 14:49:35.031562 1297065 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:49:35.031587 1297065 containerd.go:534] Images already preloaded, skipping extraction
	I1213 14:49:35.031647 1297065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:49:35.054892 1297065 command_runner.go:130] > {
	I1213 14:49:35.054913 1297065 command_runner.go:130] >   "images":  [
	I1213 14:49:35.054918 1297065 command_runner.go:130] >     {
	I1213 14:49:35.054928 1297065 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 14:49:35.054933 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.054939 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 14:49:35.054943 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.054947 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.054959 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 14:49:35.054966 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.054970 1297065 command_runner.go:130] >       "size":  "40636774",
	I1213 14:49:35.054977 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.054982 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.054993 1297065 command_runner.go:130] >     },
	I1213 14:49:35.054996 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055014 1297065 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 14:49:35.055021 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055030 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 14:49:35.055033 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055037 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055045 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 14:49:35.055049 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055053 1297065 command_runner.go:130] >       "size":  "8034419",
	I1213 14:49:35.055057 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055060 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055064 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055067 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055074 1297065 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 14:49:35.055081 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055086 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 14:49:35.055092 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055104 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055117 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 14:49:35.055121 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055125 1297065 command_runner.go:130] >       "size":  "21168808",
	I1213 14:49:35.055135 1297065 command_runner.go:130] >       "username":  "nonroot",
	I1213 14:49:35.055139 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055143 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055151 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055158 1297065 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 14:49:35.055162 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055169 1297065 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 14:49:35.055173 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055177 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055187 1297065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 14:49:35.055193 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055201 1297065 command_runner.go:130] >       "size":  "21136588",
	I1213 14:49:35.055205 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055210 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055217 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055221 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055225 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055231 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055234 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055241 1297065 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 14:49:35.055246 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055254 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 14:49:35.055257 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055261 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055272 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 14:49:35.055278 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055283 1297065 command_runner.go:130] >       "size":  "24678359",
	I1213 14:49:35.055286 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055294 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055300 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055304 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055329 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055335 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055339 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055346 1297065 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 14:49:35.055352 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055358 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 14:49:35.055371 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055375 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055383 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 14:49:35.055388 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055392 1297065 command_runner.go:130] >       "size":  "20661043",
	I1213 14:49:35.055399 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055403 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055410 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055415 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055422 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055425 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055428 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055435 1297065 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 14:49:35.055446 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055452 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 14:49:35.055455 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055460 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055469 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 14:49:35.055477 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055482 1297065 command_runner.go:130] >       "size":  "22429671",
	I1213 14:49:35.055486 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055494 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055497 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055500 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055511 1297065 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 14:49:35.055515 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055524 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 14:49:35.055529 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055533 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055541 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 14:49:35.055547 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055551 1297065 command_runner.go:130] >       "size":  "15391364",
	I1213 14:49:35.055554 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055559 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055564 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055568 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055574 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055578 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055581 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055587 1297065 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 14:49:35.055595 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055602 1297065 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 14:49:35.055608 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055612 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055620 1297065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 14:49:35.055626 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055630 1297065 command_runner.go:130] >       "size":  "267939",
	I1213 14:49:35.055633 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055637 1297065 command_runner.go:130] >         "value":  "65535"
	I1213 14:49:35.055651 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055655 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055659 1297065 command_runner.go:130] >       "pinned":  true
	I1213 14:49:35.055662 1297065 command_runner.go:130] >     }
	I1213 14:49:35.055666 1297065 command_runner.go:130] >   ]
	I1213 14:49:35.055669 1297065 command_runner.go:130] > }
	I1213 14:49:35.057995 1297065 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:49:35.058021 1297065 cache_images.go:86] Images are preloaded, skipping loading
	I1213 14:49:35.058031 1297065 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 14:49:35.058154 1297065 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-562018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
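The kubelet drop-in rendered above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (328 bytes), alongside /lib/systemd/system/kubelet.service. An illustrative way to inspect the effective unit, including this override, on the node:

    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf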
	I1213 14:49:35.058232 1297065 ssh_runner.go:195] Run: sudo crictl info
	I1213 14:49:35.082362 1297065 command_runner.go:130] > {
	I1213 14:49:35.082385 1297065 command_runner.go:130] >   "cniconfig": {
	I1213 14:49:35.082391 1297065 command_runner.go:130] >     "Networks": [
	I1213 14:49:35.082395 1297065 command_runner.go:130] >       {
	I1213 14:49:35.082401 1297065 command_runner.go:130] >         "Config": {
	I1213 14:49:35.082405 1297065 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 14:49:35.082411 1297065 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 14:49:35.082415 1297065 command_runner.go:130] >           "Plugins": [
	I1213 14:49:35.082419 1297065 command_runner.go:130] >             {
	I1213 14:49:35.082423 1297065 command_runner.go:130] >               "Network": {
	I1213 14:49:35.082427 1297065 command_runner.go:130] >                 "ipam": {},
	I1213 14:49:35.082432 1297065 command_runner.go:130] >                 "type": "loopback"
	I1213 14:49:35.082436 1297065 command_runner.go:130] >               },
	I1213 14:49:35.082446 1297065 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 14:49:35.082450 1297065 command_runner.go:130] >             }
	I1213 14:49:35.082457 1297065 command_runner.go:130] >           ],
	I1213 14:49:35.082467 1297065 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 14:49:35.082473 1297065 command_runner.go:130] >         },
	I1213 14:49:35.082488 1297065 command_runner.go:130] >         "IFName": "lo"
	I1213 14:49:35.082495 1297065 command_runner.go:130] >       }
	I1213 14:49:35.082498 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082503 1297065 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 14:49:35.082507 1297065 command_runner.go:130] >     "PluginDirs": [
	I1213 14:49:35.082511 1297065 command_runner.go:130] >       "/opt/cni/bin"
	I1213 14:49:35.082516 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082520 1297065 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 14:49:35.082527 1297065 command_runner.go:130] >     "Prefix": "eth"
	I1213 14:49:35.082530 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082533 1297065 command_runner.go:130] >   "config": {
	I1213 14:49:35.082537 1297065 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 14:49:35.082544 1297065 command_runner.go:130] >       "/etc/cdi",
	I1213 14:49:35.082549 1297065 command_runner.go:130] >       "/var/run/cdi"
	I1213 14:49:35.082552 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082559 1297065 command_runner.go:130] >     "cni": {
	I1213 14:49:35.082562 1297065 command_runner.go:130] >       "binDir": "",
	I1213 14:49:35.082566 1297065 command_runner.go:130] >       "binDirs": [
	I1213 14:49:35.082570 1297065 command_runner.go:130] >         "/opt/cni/bin"
	I1213 14:49:35.082573 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.082578 1297065 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 14:49:35.082581 1297065 command_runner.go:130] >       "confTemplate": "",
	I1213 14:49:35.082586 1297065 command_runner.go:130] >       "ipPref": "",
	I1213 14:49:35.082589 1297065 command_runner.go:130] >       "maxConfNum": 1,
	I1213 14:49:35.082593 1297065 command_runner.go:130] >       "setupSerially": false,
	I1213 14:49:35.082601 1297065 command_runner.go:130] >       "useInternalLoopback": false
	I1213 14:49:35.082604 1297065 command_runner.go:130] >     },
	I1213 14:49:35.082611 1297065 command_runner.go:130] >     "containerd": {
	I1213 14:49:35.082617 1297065 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 14:49:35.082622 1297065 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 14:49:35.082629 1297065 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 14:49:35.082634 1297065 command_runner.go:130] >       "runtimes": {
	I1213 14:49:35.082637 1297065 command_runner.go:130] >         "runc": {
	I1213 14:49:35.082648 1297065 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 14:49:35.082654 1297065 command_runner.go:130] >           "PodAnnotations": null,
	I1213 14:49:35.082659 1297065 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 14:49:35.082672 1297065 command_runner.go:130] >           "cgroupWritable": false,
	I1213 14:49:35.082676 1297065 command_runner.go:130] >           "cniConfDir": "",
	I1213 14:49:35.082680 1297065 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 14:49:35.082684 1297065 command_runner.go:130] >           "io_type": "",
	I1213 14:49:35.082688 1297065 command_runner.go:130] >           "options": {
	I1213 14:49:35.082693 1297065 command_runner.go:130] >             "BinaryName": "",
	I1213 14:49:35.082699 1297065 command_runner.go:130] >             "CriuImagePath": "",
	I1213 14:49:35.082703 1297065 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 14:49:35.082707 1297065 command_runner.go:130] >             "IoGid": 0,
	I1213 14:49:35.082714 1297065 command_runner.go:130] >             "IoUid": 0,
	I1213 14:49:35.082719 1297065 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 14:49:35.082725 1297065 command_runner.go:130] >             "Root": "",
	I1213 14:49:35.082729 1297065 command_runner.go:130] >             "ShimCgroup": "",
	I1213 14:49:35.082743 1297065 command_runner.go:130] >             "SystemdCgroup": false
	I1213 14:49:35.082746 1297065 command_runner.go:130] >           },
	I1213 14:49:35.082751 1297065 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 14:49:35.082758 1297065 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 14:49:35.082765 1297065 command_runner.go:130] >           "runtimePath": "",
	I1213 14:49:35.082769 1297065 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 14:49:35.082774 1297065 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 14:49:35.082778 1297065 command_runner.go:130] >           "snapshotter": ""
	I1213 14:49:35.082784 1297065 command_runner.go:130] >         }
	I1213 14:49:35.082787 1297065 command_runner.go:130] >       }
	I1213 14:49:35.082790 1297065 command_runner.go:130] >     },
	I1213 14:49:35.082801 1297065 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 14:49:35.082809 1297065 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 14:49:35.082816 1297065 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 14:49:35.082820 1297065 command_runner.go:130] >     "disableApparmor": false,
	I1213 14:49:35.082825 1297065 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 14:49:35.082832 1297065 command_runner.go:130] >     "disableProcMount": false,
	I1213 14:49:35.082839 1297065 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 14:49:35.082845 1297065 command_runner.go:130] >     "enableCDI": true,
	I1213 14:49:35.082850 1297065 command_runner.go:130] >     "enableSelinux": false,
	I1213 14:49:35.082857 1297065 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 14:49:35.082862 1297065 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 14:49:35.082866 1297065 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 14:49:35.082871 1297065 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 14:49:35.082875 1297065 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 14:49:35.082880 1297065 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 14:49:35.082887 1297065 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 14:49:35.082893 1297065 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 14:49:35.082904 1297065 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 14:49:35.082910 1297065 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 14:49:35.082915 1297065 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 14:49:35.082926 1297065 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 14:49:35.082932 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082936 1297065 command_runner.go:130] >   "features": {
	I1213 14:49:35.082943 1297065 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 14:49:35.082946 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082950 1297065 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 14:49:35.082959 1297065 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 14:49:35.082976 1297065 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 14:49:35.082980 1297065 command_runner.go:130] >   "runtimeHandlers": [
	I1213 14:49:35.082984 1297065 command_runner.go:130] >     {
	I1213 14:49:35.082988 1297065 command_runner.go:130] >       "features": {
	I1213 14:49:35.083000 1297065 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 14:49:35.083004 1297065 command_runner.go:130] >         "user_namespaces": true
	I1213 14:49:35.083008 1297065 command_runner.go:130] >       }
	I1213 14:49:35.083012 1297065 command_runner.go:130] >     },
	I1213 14:49:35.083017 1297065 command_runner.go:130] >     {
	I1213 14:49:35.083021 1297065 command_runner.go:130] >       "features": {
	I1213 14:49:35.083026 1297065 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 14:49:35.083033 1297065 command_runner.go:130] >         "user_namespaces": true
	I1213 14:49:35.083041 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083055 1297065 command_runner.go:130] >       "name": "runc"
	I1213 14:49:35.083058 1297065 command_runner.go:130] >     }
	I1213 14:49:35.083061 1297065 command_runner.go:130] >   ],
	I1213 14:49:35.083064 1297065 command_runner.go:130] >   "status": {
	I1213 14:49:35.083068 1297065 command_runner.go:130] >     "conditions": [
	I1213 14:49:35.083077 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083081 1297065 command_runner.go:130] >         "message": "",
	I1213 14:49:35.083085 1297065 command_runner.go:130] >         "reason": "",
	I1213 14:49:35.083089 1297065 command_runner.go:130] >         "status": true,
	I1213 14:49:35.083098 1297065 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 14:49:35.083104 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083107 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083113 1297065 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 14:49:35.083118 1297065 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 14:49:35.083122 1297065 command_runner.go:130] >         "status": false,
	I1213 14:49:35.083128 1297065 command_runner.go:130] >         "type": "NetworkReady"
	I1213 14:49:35.083132 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083135 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083160 1297065 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 14:49:35.083171 1297065 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 14:49:35.083176 1297065 command_runner.go:130] >         "status": false,
	I1213 14:49:35.083182 1297065 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 14:49:35.083186 1297065 command_runner.go:130] >       }
	I1213 14:49:35.083190 1297065 command_runner.go:130] >     ]
	I1213 14:49:35.083196 1297065 command_runner.go:130] >   }
	I1213 14:49:35.083199 1297065 command_runner.go:130] > }
	I1213 14:49:35.086343 1297065 cni.go:84] Creating CNI manager for ""
	I1213 14:49:35.086370 1297065 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:49:35.086397 1297065 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:49:35.086420 1297065 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-562018 NodeName:functional-562018 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:49:35.086540 1297065 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-562018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 14:49:35.086621 1297065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 14:49:35.094718 1297065 command_runner.go:130] > kubeadm
	I1213 14:49:35.094739 1297065 command_runner.go:130] > kubectl
	I1213 14:49:35.094743 1297065 command_runner.go:130] > kubelet
	I1213 14:49:35.094761 1297065 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:49:35.094814 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:49:35.102589 1297065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 14:49:35.115905 1297065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 14:49:35.129462 1297065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
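The kubeadm.yaml.new copied above is the multi-document config printed at kubeadm.go:196 earlier in this log (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a quick sanity check outside the test run, the rendered file can be syntax-checked with any multi-document YAML decoder; the Go sketch below only illustrates that, and the gopkg.in/yaml.v3 dependency and hard-coded path are assumptions, not part of minikube.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency; any multi-document YAML decoder would do
)

func main() {
	// Path taken from the scp line in the log; point this at a local copy if needed.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 0; ; i++ {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(fmt.Sprintf("document %d does not parse: %v", i, err))
		}
		// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("document %d: apiVersion=%v kind=%v\n", i, doc["apiVersion"], doc["kind"])
	}
}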
	I1213 14:49:35.142335 1297065 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 14:49:35.146161 1297065 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 14:49:35.146280 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:35.271079 1297065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:49:35.585791 1297065 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018 for IP: 192.168.49.2
	I1213 14:49:35.585864 1297065 certs.go:195] generating shared ca certs ...
	I1213 14:49:35.585895 1297065 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:35.586063 1297065 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 14:49:35.586138 1297065 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 14:49:35.586175 1297065 certs.go:257] generating profile certs ...
	I1213 14:49:35.586327 1297065 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key
	I1213 14:49:35.586437 1297065 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee
	I1213 14:49:35.586523 1297065 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key
	I1213 14:49:35.586557 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 14:49:35.586602 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 14:49:35.586632 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 14:49:35.586672 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 14:49:35.586707 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 14:49:35.586737 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 14:49:35.586777 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 14:49:35.586811 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 14:49:35.586902 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 14:49:35.586962 1297065 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 14:49:35.586986 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:49:35.587046 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 14:49:35.587098 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:49:35.587157 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 14:49:35.587232 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:49:35.587302 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.587371 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem -> /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.587399 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.588006 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:49:35.609077 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:49:35.630697 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:49:35.652426 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 14:49:35.670342 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 14:49:35.687837 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 14:49:35.705877 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:49:35.723466 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:49:35.740679 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:49:35.758304 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 14:49:35.776736 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 14:49:35.794339 1297065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:49:35.806740 1297065 ssh_runner.go:195] Run: openssl version
	I1213 14:49:35.812461 1297065 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 14:49:35.812883 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.820227 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:49:35.827978 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831610 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831636 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831688 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.871766 1297065 command_runner.go:130] > b5213941
	I1213 14:49:35.872189 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:49:35.879531 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.886529 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 14:49:35.894015 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897550 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897859 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897930 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.938203 1297065 command_runner.go:130] > 51391683
	I1213 14:49:35.938708 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 14:49:35.946069 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.953176 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 14:49:35.960486 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964477 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964589 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964665 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 14:49:36.007360 1297065 command_runner.go:130] > 3ec20f2e
	I1213 14:49:36.007602 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 14:49:36.019390 1297065 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:49:36.024551 1297065 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:49:36.024587 1297065 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 14:49:36.024604 1297065 command_runner.go:130] > Device: 259,1	Inode: 2346070     Links: 1
	I1213 14:49:36.024612 1297065 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 14:49:36.024618 1297065 command_runner.go:130] > Access: 2025-12-13 14:45:28.579602026 +0000
	I1213 14:49:36.024623 1297065 command_runner.go:130] > Modify: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024628 1297065 command_runner.go:130] > Change: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024634 1297065 command_runner.go:130] >  Birth: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024743 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:49:36.067430 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.067964 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:49:36.109753 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.110299 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:49:36.151650 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.152123 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:49:36.199598 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.200366 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:49:36.241923 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.242478 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 14:49:36.282927 1297065 command_runner.go:130] > Certificate will not expire
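The six openssl runs above use `-checkend 86400` to confirm that none of the control-plane certificates expires within the next 24 hours. A minimal Go sketch of the same check with crypto/x509 follows; the file path is taken from the log, and this is only an illustration of the check, not minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path taken from the log; any of the checked certificates works the same way.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -noout -checkend 86400`.
	if time.Until(cert.NotAfter) > 24*time.Hour {
		fmt.Println("Certificate will not expire")
	} else {
		fmt.Println("Certificate will expire within 24h")
	}
}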
	I1213 14:49:36.283387 1297065 kubeadm.go:401] StartCluster: {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:36.283480 1297065 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 14:49:36.283586 1297065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:49:36.308975 1297065 cri.go:89] found id: ""
	I1213 14:49:36.309092 1297065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:49:36.316103 1297065 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 14:49:36.316129 1297065 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 14:49:36.316138 1297065 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 14:49:36.317085 1297065 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 14:49:36.317145 1297065 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 14:49:36.317231 1297065 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 14:49:36.324724 1297065 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:49:36.325158 1297065 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-562018" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.325271 1297065 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "functional-562018" cluster setting kubeconfig missing "functional-562018" context setting]
	I1213 14:49:36.325603 1297065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.326011 1297065 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.326154 1297065 kapi.go:59] client config for functional-562018: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key", CAFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:49:36.326701 1297065 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 14:49:36.326719 1297065 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 14:49:36.326724 1297065 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 14:49:36.326733 1297065 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 14:49:36.326744 1297065 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 14:49:36.327001 1297065 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 14:49:36.327093 1297065 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 14:49:36.334496 1297065 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 14:49:36.334531 1297065 kubeadm.go:602] duration metric: took 17.366177ms to restartPrimaryControlPlane
	I1213 14:49:36.334540 1297065 kubeadm.go:403] duration metric: took 51.160034ms to StartCluster
	I1213 14:49:36.334555 1297065 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.334613 1297065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.335214 1297065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.335450 1297065 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 14:49:36.335789 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:36.335866 1297065 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 14:49:36.335932 1297065 addons.go:70] Setting storage-provisioner=true in profile "functional-562018"
	I1213 14:49:36.335945 1297065 addons.go:239] Setting addon storage-provisioner=true in "functional-562018"
	I1213 14:49:36.335975 1297065 host.go:66] Checking if "functional-562018" exists ...
	I1213 14:49:36.336461 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.336835 1297065 addons.go:70] Setting default-storageclass=true in profile "functional-562018"
	I1213 14:49:36.336857 1297065 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-562018"
	I1213 14:49:36.337151 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.340699 1297065 out.go:179] * Verifying Kubernetes components...
	I1213 14:49:36.343477 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:36.374082 1297065 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 14:49:36.376797 1297065 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.376892 1297065 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:36.376917 1297065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 14:49:36.376979 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:36.377245 1297065 kapi.go:59] client config for functional-562018: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key", CAFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:49:36.377532 1297065 addons.go:239] Setting addon default-storageclass=true in "functional-562018"
	I1213 14:49:36.377566 1297065 host.go:66] Checking if "functional-562018" exists ...
	I1213 14:49:36.377992 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.415567 1297065 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:36.415590 1297065 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 14:49:36.415656 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:36.416969 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:36.442534 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:36.534721 1297065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:49:36.592567 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:36.600370 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:37.335898 1297065 node_ready.go:35] waiting up to 6m0s for node "functional-562018" to be "Ready" ...
	I1213 14:49:37.335934 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.336074 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336106 1297065 retry.go:31] will retry after 199.574589ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336165 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.336178 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336184 1297065 retry.go:31] will retry after 285.216803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
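The retry.go entries above and below all show the same pattern: the kubectl apply of an addon manifest fails because nothing is listening on localhost:8441 yet, so the apply is repeated after a growing, slightly jittered delay. A minimal Go sketch of that pattern follows; applyWithRetry is a hypothetical helper and the plain kubectl invocation stands in for the sudo-over-ssh command in the log, so this is an illustration, not minikube's retry implementation.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry retries an apply-style command with growing, jittered delays,
// the pattern visible in the retry.go lines of this log.
func applyWithRetry(manifest string, attempts int) error {
	delay := 200 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		fmt.Printf("apply failed, will retry after %v: %v\n", delay, lastErr)
		time.Sleep(delay)
		// Grow the delay and add a little jitter, matching the irregular intervals in the log.
		delay = delay*2 + time.Duration(rand.Int63n(int64(100*time.Millisecond)))
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}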
	I1213 14:49:37.336272 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:37.336359 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:37.336679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:37.536000 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:37.591050 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.594766 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.594797 1297065 retry.go:31] will retry after 489.410948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.621926 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:37.677113 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.681307 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.681342 1297065 retry.go:31] will retry after 401.770697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.836587 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:37.836683 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:37.837004 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.083592 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:38.085139 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:38.190416 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.194296 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.194326 1297065 retry.go:31] will retry after 757.686696ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.207705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.207792 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.207830 1297065 retry.go:31] will retry after 505.194475ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.337091 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:38.337186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:38.337548 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.714015 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:38.783498 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.783559 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.783593 1297065 retry.go:31] will retry after 988.219406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.836722 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:38.836873 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:38.837238 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.952600 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:39.020705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:39.020749 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.020768 1297065 retry.go:31] will retry after 1.072702638s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.337164 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:39.337235 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:39.337545 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:39.337593 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
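While the addon applies are retried, node_ready.go polls GET /api/v1/nodes/functional-562018 roughly twice per second and keeps getting connection refused. The sketch below shows such a readiness poll in plain Go; the real client authenticates with the profile's client certificate and decodes the node object, so the InsecureSkipVerify transport and the status-only check here are purely illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-562018"
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// The log waits up to 6m0s for the node; poll roughly twice per second like node_ready.go.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("apiserver not reachable yet:", err)
		} else {
			fmt.Println("apiserver answered with HTTP status", resp.StatusCode)
			resp.Body.Close()
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for the apiserver")
}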
	I1213 14:49:39.772102 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:39.836685 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:39.836850 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:39.837201 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:39.843566 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:39.843633 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.843675 1297065 retry.go:31] will retry after 1.296209829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.093780 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:40.156222 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:40.156329 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.156372 1297065 retry.go:31] will retry after 965.768616ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.336552 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:40.336651 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:40.336970 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:40.836822 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:40.836895 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:40.837217 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:41.122779 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:41.140323 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:41.215097 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:41.215182 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.215214 1297065 retry.go:31] will retry after 2.369565148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.219568 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:41.219636 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.219656 1297065 retry.go:31] will retry after 2.455142313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.336947 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:41.337019 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:41.337416 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:41.837047 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:41.837124 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:41.837388 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:41.837438 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:42.337111 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:42.337201 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:42.337621 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:42.836363 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:42.836444 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:42.836803 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:43.336226 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:43.336304 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:43.336552 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:43.585084 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:43.645189 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:43.649081 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.649137 1297065 retry.go:31] will retry after 3.995275361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.675423 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:43.738811 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:43.738856 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.738876 1297065 retry.go:31] will retry after 3.319355388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.837038 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:43.837127 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:43.837467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:43.837521 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
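	Between apply attempts, the same process polls the node object roughly every 500ms and checks its Ready condition, which is where the node_ready.go warnings above come from. The following is a hedged client-go sketch of that kind of check, not minikube's own node_ready.go implementation; only the kubeconfig path and node name are taken from the log.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		// While the apiserver is down this is the "connect: connection refused" seen above.
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll about every 500ms, matching the timestamps in the log.
	for {
		ready, err := nodeReady(cs, "functional-562018")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```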
	I1213 14:49:44.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:44.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:44.336669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:44.836348 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:44.836422 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:44.836715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:45.336277 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:45.336359 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:45.336715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:45.836385 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:45.836482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:45.836839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:46.336842 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:46.336917 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:46.337174 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:46.337224 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:46.836567 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:46.836641 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:46.837050 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:47.058405 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:47.140540 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:47.144585 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.144615 1297065 retry.go:31] will retry after 3.814662677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:47.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:47.336673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:47.645178 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:47.704569 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:47.708191 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.708226 1297065 retry.go:31] will retry after 4.571128182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.836452 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:47.836522 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:47.836787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:48.336260 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:48.336350 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:48.336628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:48.836239 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:48.836318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:48.836676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:48.836743 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:49.336220 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:49.336290 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:49.336531 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:49.836286 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:49.836362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:49.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.336380 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:50.336455 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:50.336799 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.836292 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:50.836366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:50.836651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.960127 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:51.026705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:51.026749 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:51.026767 1297065 retry.go:31] will retry after 9.152833031s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:51.336157 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:51.336252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:51.336592 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:51.336645 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:51.836328 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:51.836439 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:51.836752 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:52.280634 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:52.336265 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:52.336348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:52.336649 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:52.351151 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:52.351191 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:52.351210 1297065 retry.go:31] will retry after 6.806315756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:52.837084 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:52.837176 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:52.837503 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:53.336231 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:53.336347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:53.336679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:53.336735 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:53.836278 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:53.836358 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:53.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:54.336375 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:54.336453 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:54.336787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:54.836534 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:54.836609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:54.836960 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:55.336530 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:55.336608 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:55.336965 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:55.337034 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:55.836817 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:55.836889 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:55.837215 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:56.337019 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:56.337095 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:56.337433 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:56.836157 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:56.836242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:56.836511 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:57.336450 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:57.336529 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:57.336899 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:57.836240 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:57.836331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:57.836629 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:57.836681 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:58.336184 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:58.336276 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:58.336593 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:58.836286 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:58.836386 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:58.836728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:59.158224 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:59.216557 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:59.216609 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:59.216627 1297065 retry.go:31] will retry after 13.782587086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
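	Both failure modes in this stretch of the log point the same way: kubectl (hitting https://localhost:8441 from inside the node) and the test process (hitting https://192.168.49.2:8441 from the host) each get connection refused, which suggests the apiserver is simply not listening rather than one particular route being broken. A quick standard-library-only sketch of that check, offered as an assumed diagnostic rather than anything the test itself runs:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// dialCheck reports whether a plain TCP connection to addr succeeds.
// TLS and auth don't matter here: "connection refused" happens before either.
func dialCheck(addr string) string {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return fmt.Sprintf("%s: %v", addr, err)
	}
	conn.Close()
	return fmt.Sprintf("%s: listening", addr)
}

func main() {
	// Endpoints copied from the log above.
	for _, addr := range []string{"localhost:8441", "192.168.49.2:8441"} {
		fmt.Println(dialCheck(addr))
	}
}
```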
	I1213 14:49:59.336899 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:59.336976 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:59.337309 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:59.837050 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:59.837117 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:59.837393 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:59.837436 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:00.179978 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:00.336210 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:00.336301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:00.337482 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 14:50:00.358964 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:00.359008 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:00.359030 1297065 retry.go:31] will retry after 12.357990487s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:00.836789 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:00.836882 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:00.837203 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:01.336532 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:01.336609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:01.336921 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:01.836255 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:01.836341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:01.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:02.336512 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:02.336592 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:02.336956 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:02.337013 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:02.836530 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:02.836611 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:02.836888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:03.336279 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:03.336358 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:03.336701 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:03.836401 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:03.836474 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:03.836845 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:04.336380 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:04.336454 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:04.336784 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:04.836248 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:04.836328 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:04.836660 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:04.836716 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:05.336407 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:05.336490 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:05.336806 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:05.836467 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:05.836548 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:05.836865 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:06.336870 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:06.336953 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:06.337350 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:06.837024 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:06.837097 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:06.837419 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:06.837478 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:07.336416 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:07.336488 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:07.336747 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:07.836490 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:07.836567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:07.836906 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:08.336625 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:08.336699 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:08.337020 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:08.836518 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:08.836588 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:08.836865 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:09.336612 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:09.336692 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:09.337049 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:09.337109 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:09.836858 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:09.836939 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:09.837272 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:10.337051 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:10.337125 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:10.337387 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:10.837153 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:10.837234 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:10.837582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:11.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:11.336261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:11.336582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:11.836188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:11.836256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:11.836517 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:11.836567 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:12.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:12.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:12.336916 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:12.717305 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:12.775348 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:12.775393 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:12.775414 1297065 retry.go:31] will retry after 16.474515121s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:12.836567 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:12.836650 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:12.837019 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:13.000372 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:13.059399 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:13.063613 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:13.063652 1297065 retry.go:31] will retry after 8.071550656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:13.336122 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:13.336199 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:13.336467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:13.836136 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:13.836218 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:13.836591 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:13.836660 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:14.336363 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:14.336438 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:14.336774 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:14.836188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:14.836261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:14.836540 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:15.336277 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:15.336353 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:15.336677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:15.836219 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:15.836293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:15.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:16.336546 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:16.336617 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:16.336864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:16.336904 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:16.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:16.836348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:16.836648 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:17.336586 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:17.336661 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:17.337008 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:17.836520 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:17.836585 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:17.836858 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:18.336257 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:18.336354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:18.336715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:18.836428 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:18.836502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:18.836842 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:18.836899 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:19.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:19.336311 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:19.336563 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:19.836231 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:19.836306 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:19.836619 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:20.336334 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:20.336416 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:20.336797 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:20.836189 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:20.836268 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:20.836618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:21.136217 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:21.193283 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:21.196963 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:21.196996 1297065 retry.go:31] will retry after 15.530830741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:21.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:21.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:21.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:21.336677 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:21.836352 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:21.836433 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:21.836751 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:22.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:22.336615 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:22.336948 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:22.836275 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:22.836348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:22.836696 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:23.336400 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:23.336482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:23.336828 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:23.336887 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:23.836198 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:23.836296 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:23.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:24.336254 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:24.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:24.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:24.836327 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:24.836403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:24.836743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:25.336278 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:25.336354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:25.336703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:25.836226 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:25.836309 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:25.836679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:25.836740 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:26.337200 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:26.337293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:26.337628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:26.836405 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:26.836480 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:26.836777 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:27.336562 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:27.336653 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:27.337005 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:27.836230 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:27.836307 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:27.836657 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:28.336177 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:28.336267 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:28.336587 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:28.336638 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:28.836250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:28.836332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:28.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:29.250199 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:29.308318 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:29.311716 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:29.311747 1297065 retry.go:31] will retry after 30.463725654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:29.336999 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:29.337080 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:29.337458 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:29.836155 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:29.836222 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:29.836520 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:30.336243 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:30.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:30.336620 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:30.336669 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:30.836228 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:30.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:30.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:31.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:31.336285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:31.336588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:31.836269 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:31.836340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:31.836687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:32.336490 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:32.336568 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:32.336902 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:32.336957 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:32.836192 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:32.836262 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:32.836598 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:33.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:33.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:33.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:33.836245 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:33.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:33.836664 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:34.336184 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:34.336253 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:34.336535 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:34.836284 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:34.836360 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:34.836791 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:34.836848 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:35.336516 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:35.336596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:35.336938 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:35.836527 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:35.836598 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:35.836864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:36.336942 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:36.337020 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:36.337342 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:36.728993 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:36.785078 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:36.788836 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:36.788868 1297065 retry.go:31] will retry after 31.693829046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:36.837069 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:36.837145 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:36.837461 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:36.837513 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:37.336193 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:37.336260 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:37.336549 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:37.836234 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:37.836312 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:37.836628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:38.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:38.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:38.336672 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:38.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:38.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:38.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:39.336229 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:39.336304 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:39.336650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:39.336709 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:39.836236 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:39.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:39.836618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:40.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:40.336355 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:40.336614 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:40.836272 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:40.836382 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:40.836789 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:41.336524 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:41.336601 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:41.336927 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:41.336987 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:41.836201 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:41.836278 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:41.836626 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:42.336633 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:42.336716 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:42.337072 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:42.836881 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:42.836955 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:42.837306 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:43.337071 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:43.337144 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:43.337415 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:43.337468 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:43.836983 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:43.837056 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:43.837412 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:44.336153 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:44.336229 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:44.336573 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:44.836267 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:44.836356 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:44.836695 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:45.336450 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:45.336538 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:45.336949 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:45.836752 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:45.836829 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:45.837178 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:45.837235 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:46.336981 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:46.337060 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:46.337351 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:46.836213 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:46.836319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:46.836733 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:47.336523 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:47.336680 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:47.336969 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:47.836511 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:47.836579 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:47.836844 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:48.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:48.336310 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:48.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:48.336704 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:48.836371 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:48.836487 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:48.836832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:49.336188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:49.336255 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:49.336544 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:49.836263 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:49.836365 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:49.836653 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:50.336392 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:50.336468 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:50.336808 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:50.336866 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:50.836325 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:50.836439 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:50.836713 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:51.336252 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:51.336346 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:51.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:51.836280 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:51.836353 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:51.836671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:52.336550 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:52.336630 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:52.336893 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:52.336943 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:52.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:52.836322 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:52.836667 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:53.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:53.336342 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:53.336699 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:53.836191 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:53.836264 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:53.836543 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:54.336246 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:54.336345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:54.336700 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:54.836402 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:54.836475 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:54.836811 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:54.836869 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:55.336360 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:55.336437 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:55.336727 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:55.836432 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:55.836512 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:55.836850 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:56.337034 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:56.337132 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:56.337451 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:56.836142 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:56.836214 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:56.836473 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:57.336469 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:57.336554 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:57.336893 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:57.336949 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:57.836297 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:57.836381 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:57.836714 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:58.336385 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:58.336465 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:58.336743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:58.836460 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:58.836541 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:58.836889 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:59.336265 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:59.336340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:59.336697 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:59.776318 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:59.836157 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:59.836232 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:59.836466 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:59.836509 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:59.839555 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:59.839592 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:59.839611 1297065 retry.go:31] will retry after 31.022889465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:51:00.336284 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:00.336385 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:00.337017 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:00.836870 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:00.836951 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:00.837274 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:01.337018 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:01.337093 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:01.337377 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:01.836106 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:01.836178 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:01.836532 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:01.836591 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:02.336582 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:02.336658 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:02.336989 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:02.836525 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:02.836602 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:02.836897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:03.336270 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:03.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:03.336732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:03.836448 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:03.836526 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:03.836864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:03.836920 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:04.336555 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:04.336630 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:04.336888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:04.836543 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:04.836644 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:04.836971 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:05.336771 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:05.336847 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:05.337186 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:05.836528 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:05.836603 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:05.836858 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:06.336901 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:06.336978 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:06.337275 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:06.337322 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:06.836616 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:06.836698 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:06.837028 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:07.336511 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:07.336579 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:07.336834 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:07.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:07.836317 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:07.836644 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:08.336229 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:08.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:08.336668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:08.482933 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:51:08.546772 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:08.546820 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:08.546914 1297065 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 14:51:08.836114 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:08.836184 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:08.836454 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:08.836495 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
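(Editor's note on the polling loop interleaved with the addon retries: node_ready.go keeps issuing GET /api/v1/nodes/functional-562018 on a roughly 500ms cadence, visible in the .336/.836 timestamps, and logs a warning each time the connection is refused. Below is a minimal client-go sketch of what such a readiness poll conceptually does; the node name and cadence are taken from the log, but the kubeconfig path and function names are illustrative assumptions, not minikube's node_ready.go code.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady fetches the node and reports whether its Ready condition is True.
	func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver is down
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Kubeconfig path is illustrative; point it at a real kubeconfig to experiment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ready, err := nodeIsReady(context.Background(), cs, "functional-562018")
			if err != nil {
				fmt.Println("error getting node (will retry):", err)
			} else if ready {
				fmt.Println("node is Ready")
				return
			}
			// Matches the ~0.5s cadence visible in the log timestamps.
			time.Sleep(500 * time.Millisecond)
		}
	}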
	I1213 14:51:09.336176 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:09.336252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:09.336597 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:09.836346 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:09.836424 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:09.836727 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:10.336174 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:10.336265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:10.336548 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:10.836166 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:10.836272 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:10.836571 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:10.836621 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:11.336180 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:11.336265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:11.336582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:11.836217 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:11.836300 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:11.836626 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:12.336568 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:12.336663 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:12.336970 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:12.836801 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:12.836879 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:12.837247 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:12.837301 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:13.336980 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:13.337062 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:13.337320 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:13.837125 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:13.837211 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:13.837540 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:14.336301 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:14.336390 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:14.336757 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:14.836166 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:14.836241 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:14.836499 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:15.336228 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:15.336300 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:15.336648 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:15.336706 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:15.836385 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:15.836461 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:15.836787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:16.336816 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:16.336889 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:16.337169 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:16.836948 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:16.837028 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:16.837350 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:17.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:17.336319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:17.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:17.836172 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:17.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:17.836555 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:17.836606 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:18.336236 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:18.336313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:18.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:18.836373 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:18.836452 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:18.836760 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:19.336167 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:19.336238 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:19.336538 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:19.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:19.836297 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:19.836617 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:19.836675 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:20.336339 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:20.336412 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:20.336771 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:20.836180 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:20.836251 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:20.836567 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:21.336259 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:21.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:21.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:21.836380 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:21.836462 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:21.836800 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:21.836855 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:22.336522 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:22.336591 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:22.336867 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:22.836547 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:22.836626 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:22.836957 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:23.336750 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:23.336825 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:23.337141 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:23.836507 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:23.836582 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:23.836841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:23.836883 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:24.336607 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:24.336681 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:24.337016 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:24.836840 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:24.836916 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:24.837240 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:25.336547 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:25.336619 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:25.336933 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:25.836630 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:25.836712 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:25.837049 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:25.837104 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:26.337004 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:26.337079 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:26.337406 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:26.836128 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:26.836203 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:26.836467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:27.336400 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:27.336476 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:27.336833 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:27.836240 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:27.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:27.836680 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:28.336379 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:28.336452 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:28.336710 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:28.336750 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:28.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:28.836333 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:28.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:29.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:29.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:29.336705 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:29.836347 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:29.836422 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:29.836690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:30.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:30.336351 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:30.336706 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:30.836402 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:30.836499 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:30.836836 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:30.836891 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:30.863046 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:51:30.922204 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:30.922247 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:30.922363 1297065 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 14:51:30.925463 1297065 out.go:179] * Enabled addons: 
	I1213 14:51:30.929007 1297065 addons.go:530] duration metric: took 1m54.593151344s for enable addons: enabled=[]
	I1213 14:51:31.336478 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:31.336574 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:31.336911 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:31.836663 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:31.836742 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:31.837400 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:32.336285 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:32.336362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:32.337832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 14:51:32.836218 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:32.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:32.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:33.336237 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:33.336311 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:33.336634 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:33.336688 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:33.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:33.836296 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:33.836630 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:34.336182 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:34.336256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:34.336569 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:34.836237 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:34.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:34.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:35.336246 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:35.336327 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:35.336677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:35.336739 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:35.836381 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:35.836450 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:35.836754 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:36.336847 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:36.336928 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:36.337255 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:36.836533 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:36.836613 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:36.836939 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:37.336507 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:37.336573 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:37.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:37.336879 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:37.836228 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:37.836301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:37.836594 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:38.336263 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:38.336341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:38.336675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:38.836200 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:38.836285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:38.836811 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:39.336276 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:39.336373 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:39.336728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:39.836259 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:39.836349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:39.836684 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:39.836742 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:40.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:40.336295 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:40.336618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:40.836435 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:40.836524 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:40.836905 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:41.336775 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:41.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:41.337175 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:41.836559 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:41.836631 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:41.836894 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:41.836936 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:42.336658 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:42.336748 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:42.337128 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:42.836909 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:42.836987 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:42.837289 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:43.337040 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:43.337127 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:43.337474 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:43.836192 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:43.836275 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:43.836624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:44.336291 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:44.336388 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:44.336784 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:44.336841 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:44.836180 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:44.836254 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:44.836551 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:45.336321 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:45.336400 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:45.336690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:45.836435 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:45.836510 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:45.836833 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:46.336779 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:46.336848 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:46.337141 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:46.337201 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:46.836518 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:46.836596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:46.836935 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:47.336893 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:47.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:47.337308 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:47.836529 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:47.836614 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:47.836876 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:48.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:48.336337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:48.336692 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:48.836415 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:48.836494 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:48.836834 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:48.836892 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:49.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:49.336621 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:49.336897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:49.836323 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:49.836400 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:49.836703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:50.336284 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:50.336361 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:50.336695 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:50.836346 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:50.836424 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:50.836742 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:51.336225 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:51.336303 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:51.336650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:51.336709 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:51.836373 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:51.836452 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:51.836792 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:52.336469 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:52.336538 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:52.336793 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:52.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:52.836345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:52.836678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:53.336269 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:53.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:53.336675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:53.336740 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:53.836126 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:53.836205 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:53.836462 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:54.336204 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:54.336277 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:54.336594 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:54.836224 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:54.836301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:54.836659 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:55.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:55.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:55.336616 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:55.836312 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:55.836389 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:55.836728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:55.836782 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:56.336654 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:56.336732 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:56.337071 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:56.836525 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:56.836605 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:56.836887 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:57.336719 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:57.336796 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:57.337143 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:57.836841 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:57.836920 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:57.837247 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:57.837302 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:58.337040 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:58.337110 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:58.337364 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:58.837119 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:58.837198 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:58.837538 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:59.336279 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:59.336367 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:59.336734 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:59.836438 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:59.836511 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:59.836774 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:00.355395 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:00.355523 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:00.355852 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:00.355945 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:00.836731 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:00.836813 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:00.837145 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:01.336514 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:01.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:01.336898 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:01.836757 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:01.836832 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:01.837174 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:02.336946 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:02.337023 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:02.337363 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:02.836523 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:02.836599 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:02.836906 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:02.836965 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	[polling loop condensed: the GET https://192.168.49.2:8441/api/v1/nodes/functional-562018 request above repeats every ~0.5s from 14:52:03 through 14:53:04, each attempt receiving no response (milliseconds=0); node_ready.go:55 keeps logging the same "will retry" warning: Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused]
	I1213 14:53:05.336328 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:05.336405 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:05.336722 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:05.836169 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:05.836249 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:05.836532 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:06.337061 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:06.337133 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:06.337448 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:06.337510 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:06.836170 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:06.836312 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:06.836648 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:07.336505 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:07.336579 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:07.336834 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:07.836162 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:07.836243 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:07.836604 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:08.336414 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:08.336492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:08.336833 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:08.836389 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:08.836459 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:08.836768 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:08.836825 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:09.336249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:09.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:09.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:09.836385 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:09.836463 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:09.836810 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:10.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:10.336589 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:10.336857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:10.836237 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:10.836310 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:10.836668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:11.336409 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:11.336502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:11.336898 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:11.336954 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:11.836193 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:11.836273 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:11.836553 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:12.336497 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:12.336582 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:12.336916 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:12.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:12.836346 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:12.836677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:13.336363 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:13.336435 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:13.336727 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:13.836260 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:13.836336 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:13.836645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:13.836693 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:14.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:14.336320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:14.336647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:14.836180 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:14.836273 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:14.836579 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:15.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:15.336342 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:15.336712 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:15.836446 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:15.836528 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:15.836857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:15.836911 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:16.336886 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:16.336953 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:16.337211 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:16.836970 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:16.837043 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:16.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:17.336898 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:17.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:17.337298 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:17.837031 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:17.837110 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:17.837392 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:17.837435 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:18.336966 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:18.337049 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:18.337376 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:18.837166 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:18.837253 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:18.837689 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:19.336193 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:19.336283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:19.336617 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:19.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:19.836332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:19.836666 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:20.336399 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:20.336476 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:20.336824 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:20.336877 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:20.836528 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:20.836607 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:20.836879 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:21.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:21.336319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:21.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:21.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:21.836331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:21.836682 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:22.336425 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:22.336492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:22.336751 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:22.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:22.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:22.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:22.836701 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:23.336413 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:23.336491 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:23.336832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:23.836195 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:23.836282 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:23.836590 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:24.336251 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:24.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:24.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:24.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:24.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:24.836645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:25.336331 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:25.336403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:25.336743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:25.336792 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:25.836245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:25.836324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:25.836644 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:26.336605 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:26.336680 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:26.337038 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:26.836509 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:26.836578 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:26.836824 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:27.336452 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:27.336525 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:27.336887 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:27.336942 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:27.836486 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:27.836568 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:27.836917 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:28.336112 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:28.336186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:28.336563 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:28.836282 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:28.836357 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:28.836713 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:29.336309 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:29.336384 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:29.336723 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:29.836403 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:29.836478 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:29.836733 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:29.836776 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:30.336220 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:30.336298 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:30.336637 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:30.836357 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:30.836431 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:30.836763 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:31.336439 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:31.336532 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:31.336820 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:31.836503 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:31.836582 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:31.836898 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:31.836954 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:32.336893 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:32.336969 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:32.337280 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:32.837017 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:32.837102 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:32.837392 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:33.336206 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:33.336289 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:33.336624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:33.836226 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:33.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:33.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:34.336143 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:34.336223 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:34.336515 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:34.336566 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:34.836223 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:34.836309 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:34.836660 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:35.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:35.336337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:35.336768 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:35.836351 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:35.836427 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:35.836683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:36.336777 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:36.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:36.337168 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:36.337222 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:36.837003 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:36.837084 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:36.837449 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:37.336375 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:37.336445 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:37.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:37.836402 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:37.836479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:37.836826 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:38.336440 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:38.336525 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:38.336860 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:38.836259 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:38.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:38.836606 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:38.836659 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:39.336506 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:39.336596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:39.337235 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:39.836335 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:39.836421 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:39.836794 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:40.336522 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:40.336587 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:40.336888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:40.836592 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:40.836674 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:40.837021 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:40.837076 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:41.336578 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:41.336655 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:41.336975 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:41.836525 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:41.836604 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:41.836959 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:42.336767 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:42.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:42.337172 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:42.836977 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:42.837055 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:42.837406 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:42.837463 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:43.336096 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:43.336165 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:43.336522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:43.836216 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:43.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:43.836677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:44.336275 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:44.336366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:44.336718 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:44.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:44.836246 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:44.836531 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:45.336294 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:45.336384 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:45.336759 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:45.336815 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:45.836495 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:45.836571 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:45.836902 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:46.336923 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:46.336991 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:46.337248 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:46.836581 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:46.836658 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:46.836955 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:47.336876 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:47.336959 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:47.337291 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:47.337349 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:47.837127 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:47.837195 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:47.837512 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:48.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:48.336347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:48.336704 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:48.836213 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:48.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:48.836647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:49.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:49.336258 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:49.336584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:49.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:49.836330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:49.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:49.836707 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:50.336396 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:50.336475 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:50.336802 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:50.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:50.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:50.836524 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:51.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:51.336340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:51.336661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:51.836254 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:51.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:51.836673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:51.836731 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:52.336439 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:52.336508 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:52.336813 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:52.836552 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:52.836646 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:52.837037 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:53.336867 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:53.336943 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:53.337248 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:53.836529 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:53.836600 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:53.836882 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:53.836925 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:54.336730 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:54.336804 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:54.337142 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:54.836954 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:54.837030 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:54.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:55.337104 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:55.337186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:55.337475 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:55.836190 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:55.836268 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:55.836616 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:56.336432 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:56.336515 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:56.336847 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:56.336900 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:56.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:56.836260 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:56.836553 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:57.336500 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:57.336575 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:57.336934 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:57.836737 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:57.836827 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:57.837184 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:58.336550 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:58.336646 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:58.336966 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:58.337018 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:58.836741 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:58.836828 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:58.837162 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:59.336945 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:59.337026 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:59.337378 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:59.836973 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:59.837043 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:59.837302 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:00.337185 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:00.337285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:00.337926 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:00.338025 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:00.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:00.836326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:00.836691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:01.336224 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:01.336316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:01.336589 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:01.836224 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:01.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:01.836636 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:02.336530 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:02.336607 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:02.336904 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:02.836600 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:02.836677 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:02.837015 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:02.837082 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:03.336835 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:03.336910 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:03.337276 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:03.837094 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:03.837170 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:03.837522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:04.336212 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:04.336283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:04.336559 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:04.836246 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:04.836354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:04.836699 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:05.336254 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:05.336341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:05.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:05.336745 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:05.836209 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:05.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:05.836622 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:06.336695 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:06.336783 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:06.337108 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:06.836892 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:06.836966 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:06.837308 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:07.336123 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:07.336192 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:07.336465 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:07.836757 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:07.836832 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:07.837160 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:07.837217 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:08.336959 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:08.337035 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:08.337354 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:08.837047 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:08.837117 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:08.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:09.336797 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:09.336876 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:09.337176 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:09.836976 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:09.837060 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:09.837357 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:09.837405 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:10.337145 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:10.337219 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:10.337522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:10.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:10.836324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:10.836624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:11.336249 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:11.336335 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:11.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:11.836251 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:11.836329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:11.836584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:12.336557 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:12.336629 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:12.336964 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:12.337021 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:12.836792 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:12.836867 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:12.837180 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:13.336499 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:13.336580 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:13.336912 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:13.836538 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:13.836617 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:13.836932 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:14.336207 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:14.336299 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:14.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:14.836329 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:14.836410 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:14.836729 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:14.836786 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:15.336238 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:15.336371 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:15.336658 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:15.836347 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:15.836425 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:15.836765 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:16.336570 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:16.336641 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:16.336896 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:16.836234 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:16.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:16.836636 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:17.336499 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:17.336578 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:17.336890 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:17.336950 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:17.836161 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:17.836245 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:17.836561 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:18.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:18.336345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:18.336673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:18.836422 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:18.836499 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:18.836856 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:19.336539 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:19.336609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:19.336871 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:19.836236 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:19.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:19.836657 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:19.836712 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:20.336398 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:20.336479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:20.336829 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:20.836244 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:20.836336 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:20.836642 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:21.336253 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:21.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:21.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:21.836309 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:21.836398 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:21.836758 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:21.836814 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:22.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:22.336624 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:22.336925 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:22.836625 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:22.836707 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:22.837057 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:23.336641 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:23.336724 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:23.337073 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:23.836556 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:23.836634 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:23.836903 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:23.836945 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:24.336275 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:24.336357 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:24.336645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:24.836238 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:24.836349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:24.836732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:25.336455 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:25.336529 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:25.336850 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:25.836272 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:25.836354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:25.836679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:26.336762 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:26.336843 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:26.337194 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:26.337248 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:26.836530 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:26.836634 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:26.836949 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:27.337082 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:27.337168 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:27.337523 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:27.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:27.836347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:27.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:28.336192 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:28.336265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:28.336562 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:28.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:28.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:28.836676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:28.836744 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:29.336414 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:29.336563 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:29.336947 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:29.836198 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:29.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:29.836614 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:30.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:30.336318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:30.336656 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:30.836210 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:30.836286 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:30.836612 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:31.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:31.336362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:31.336639 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:31.336684 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:31.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:31.836343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:31.836692 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:32.336488 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:32.336567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:32.336863 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:32.836173 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:32.836265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:32.836578 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:33.336248 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:33.336322 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:33.336687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:33.336748 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:33.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:33.836362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:33.836704 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:34.336406 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:34.336478 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:34.336748 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:34.836467 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:34.836551 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:34.836857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:35.336588 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:35.336668 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:35.337027 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:35.337086 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:35.836514 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:35.836585 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:35.836913 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:36.336967 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:36.337041 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:36.337376 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:36.837202 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:36.837285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:36.837584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:37.336502 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:37.336591 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:37.336896 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:37.836591 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:37.836694 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:37.837046 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:37.837115 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:38.336899 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:38.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:38.337328 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:38.837050 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:38.837126 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:38.837404 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:39.336160 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:39.336232 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:39.336580 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:39.836289 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:39.836382 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:39.836703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:40.336371 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:40.336443 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:40.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:40.336759 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:40.836239 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:40.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:40.836655 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:41.336240 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:41.336320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:41.336686 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:41.836231 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:41.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:41.836611 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:42.336623 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:42.336717 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:42.337080 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:42.337132 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:42.836776 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:42.836862 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:42.837178 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:43.336512 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:43.336586 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:43.336846 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:43.836233 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:43.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:43.836685 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:44.336266 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:44.336339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:44.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:44.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:44.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:44.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:44.836676 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:45.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:45.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:45.336676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:45.836517 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:45.836597 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:45.836920 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:46.336894 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:46.336967 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:46.337224 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:46.837014 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:46.837094 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:46.837437 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:46.837490 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:47.336253 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:47.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:47.336670 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:47.836235 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:47.836302 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:47.836610 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:48.336235 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:48.336318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:48.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:48.836257 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:48.836337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:48.836678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:49.336349 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:49.336431 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:49.336732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:49.336821 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:49.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:49.836293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:49.836634 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:50.336226 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:50.336307 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:50.336635 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:50.836333 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:50.836410 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:50.836688 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:51.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:51.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:51.336678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:51.836396 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:51.836479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:51.836771 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:51.836817 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:52.336516 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:52.336593 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:52.336852 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:52.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:52.836340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:52.836773 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:53.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:53.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:53.336935 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:53.836510 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:53.836587 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:53.836851 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:53.836896 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:54.336367 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:54.336467 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:54.336808 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:54.836267 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:54.836343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:54.836686 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:55.336171 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:55.336242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:55.336562 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:55.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:55.836320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:55.836689 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:56.336624 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:56.336725 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:56.337092 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:56.337153 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:56.836464 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:56.836539 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:56.836794 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:57.336513 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:57.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:57.336897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:57.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:57.836300 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:57.836664 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:58.336100 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:58.336175 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:58.336496 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:58.836220 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:58.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:58.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:58.836706 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:59.336458 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:59.336535 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:59.336905 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:59.836288 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:59.836366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:59.836722 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:00.336435 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:00.336516 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:00.336842 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:00.836803 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:00.836881 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:00.837232 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:00.837290 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:01.336546 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:01.336620 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:01.336919 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:01.836631 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:01.836717 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:01.837061 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:02.336921 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:02.337000 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:02.337379 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:02.837188 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:02.837257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:02.837522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:02.837565 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:03.336219 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:03.336301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:03.336654 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:03.836248 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:03.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:03.836635 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:04.336178 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:04.336251 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:04.336567 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:04.836232 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:04.836318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:04.836669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:05.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:05.336317 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:05.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:05.336713 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:05.836366 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:05.836448 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:05.836735 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:06.336637 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:06.336720 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:06.337074 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:06.836743 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:06.836817 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:06.837172 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:07.336998 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:07.337074 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:07.337343 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:07.337395 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:07.837167 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:07.837242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:07.837584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:08.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:08.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:08.336647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:08.836178 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:08.836256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:08.836598 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:09.336224 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:09.336297 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:09.336632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:09.836244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:09.836321 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:09.836675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:09.836731 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:10.336173 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:10.336248 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:10.336521 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:10.836203 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:10.836283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:10.836642 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:11.336345 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:11.336437 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:11.336802 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:11.836493 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:11.836567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:11.836846 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:11.836897 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:12.336745 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:12.336822 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:12.337164 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:12.836822 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:12.836903 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:12.837329 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:13.337068 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:13.337137 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:13.337477 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:13.836207 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:13.836286 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:13.836668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:14.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:14.336327 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:14.336632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:14.336679 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:14.836300 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:14.836375 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:14.836649 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:15.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:15.336332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:15.336669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:15.836251 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:15.836333 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:15.836683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:16.336651 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:16.336729 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:16.337093 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:16.337145 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:16.836909 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:16.836992 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:16.837356 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:17.336137 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:17.336212 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:17.336571 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:17.836247 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:17.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:17.836610 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:18.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:18.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:18.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:18.836230 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:18.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:18.836647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:18.836705 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:19.336364 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:19.336454 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:19.336797 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:19.836215 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:19.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:19.836625 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:20.336325 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:20.336403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:20.336754 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:20.836274 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:20.836352 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:20.836687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:20.836744 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:21.336273 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:21.336350 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:21.336700 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:21.836398 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:21.836482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:21.836816 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:22.336510 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:22.336583 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:22.336841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:22.836211 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:22.836292 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:22.836650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:23.336238 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:23.336314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:23.336696 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:23.336754 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:23.836429 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:23.836502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:23.836789 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:24.336496 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:24.336580 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:24.336961 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:24.836574 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:24.836650 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:24.836988 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:25.336500 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:25.336566 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:25.336817 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:25.336861 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:25.836628 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:25.836709 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:25.837047 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:26.337039 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:26.337121 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:26.337470 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:26.836162 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:26.836244 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:26.836581 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:27.336591 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:27.336672 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:27.337011 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:27.337065 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:27.836601 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:27.836681 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:27.837000 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:28.336523 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:28.336604 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:28.336874 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:28.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:28.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:28.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:29.336385 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:29.336497 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:29.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:29.836183 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:29.836257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:29.836558 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:29.836608 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:30.336289 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:30.336367 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:30.336681 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:30.836223 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:30.836310 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:30.836632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:31.336179 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:31.336247 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:31.336520 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:31.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:31.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:31.836631 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:31.836685 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:32.336406 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:32.336490 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:32.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:32.836185 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:32.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:32.836552 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:33.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:33.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:33.336778 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:33.836367 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:33.836492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:33.836841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:33.836899 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:34.336602 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:34.336672 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:34.336962 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:34.836466 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:34.836542 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:34.836843 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:35.336251 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:35.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:35.336690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:35.836215 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:35.836283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:35.836600 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:36.336641 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:36.336716 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:36.337095 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:36.337155 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:36.836776 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:36.836857 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:36.837203 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:37.337030 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:37.337151 1297065 node_ready.go:38] duration metric: took 6m0.001157945s for node "functional-562018" to be "Ready" ...
	I1213 14:55:37.340291 1297065 out.go:203] 
	W1213 14:55:37.343143 1297065 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 14:55:37.343162 1297065 out.go:285] * 
	W1213 14:55:37.345311 1297065 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 14:55:37.348302 1297065 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839061081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839082069Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839142982Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839165489Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839181579Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839196856Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839210362Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839227009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839247973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839286208Z" level=info msg="Connect containerd service"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.839634951Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.840751317Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.850265604Z" level=info msg="Start subscribing containerd event"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.850350033Z" level=info msg="Start recovering state"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.850594999Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.850703108Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886139866Z" level=info msg="Start event monitor"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886335201Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886398699Z" level=info msg="Start streaming server"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886467719Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886526179Z" level=info msg="runtime interface starting up..."
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886580873Z" level=info msg="starting plugins..."
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.886640704Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 14:49:34 functional-562018 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 14:49:34 functional-562018 containerd[5205]: time="2025-12-13T14:49:34.893206436Z" level=info msg="containerd successfully booted in 0.076868s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:55:41.331601    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:41.332075    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:41.333935    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:41.334385    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:41.336107    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 14:55:41 up  6:38,  0 user,  load average: 0.20, 0.27, 0.75
	Linux functional-562018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 14:55:38 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:38 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 13 14:55:38 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:38 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:38 functional-562018 kubelet[8398]: E1213 14:55:38.897106    8398 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:38 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:38 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:39 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 13 14:55:39 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:39 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:39 functional-562018 kubelet[8470]: E1213 14:55:39.650158    8470 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:39 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:39 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:40 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 13 14:55:40 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:40 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:40 functional-562018 kubelet[8492]: E1213 14:55:40.386339    8492 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:40 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:40 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:41 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 13 14:55:41 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:41 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:41 functional-562018 kubelet[8541]: E1213 14:55:41.150535    8541 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:41 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:41 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 2 (413.807387ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.34s)
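The kubelet journal above is the proximate failure: on Kubernetes v1.35.0-beta.0 the kubelet exits during configuration validation because the node is still on cgroup v1, so kube-apiserver is never started and every subsequent check against 192.168.49.2:8441 is refused. A minimal manual check of the node's cgroup mode (not part of the recorded run; the container name is taken from this report) would be:

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means the legacy cgroup v1 hierarchy
	stat -fc %T /sys/fs/cgroup/
	# same check from inside the minikube node container
	docker exec functional-562018 stat -fc %T /sys/fs/cgroup/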

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 kubectl -- --context functional-562018 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 kubectl -- --context functional-562018 get pods: exit status 1 (117.416143ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-562018 kubectl -- --context functional-562018 get pods": exit status 1
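The refused connection to 192.168.49.2:8441 is consistent with the kubelet never starting: no static pods, including kube-apiserver, get created. A quick way to confirm that from outside the cluster, assuming the container name from this report and the crictl binary noted later in these logs, is to ask containerd directly:

	# lists any kube-apiserver container, running or exited; empty output means it was never created
	docker exec functional-562018 sudo crictl ps -a --name kube-apiserver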
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-562018
helpers_test.go:244: (dbg) docker inspect functional-562018:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	        "Created": "2025-12-13T14:41:15.451086653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1291703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T14:41:15.527927053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hosts",
	        "LogPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648-json.log",
	        "Name": "/functional-562018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-562018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-562018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	                "LowerDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-562018",
	                "Source": "/var/lib/docker/volumes/functional-562018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-562018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-562018",
	                "name.minikube.sigs.k8s.io": "functional-562018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b22297a29553cdd0dbc4eaa766abcdb1e67465ee18f1e2f5f9917dc8cf6d08",
	            "SandboxKey": "/var/run/docker/netns/f4b22297a295",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-562018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f3:95:ff:30:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd2e7aed753870546b4bccdeed8073ad5795c14334e906d06b964f72dc448c38",
	                    "EndpointID": "a0e947c1e40f773105c811b67b7d1d63f19d3a20060380bbde944bf9bfe39be5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-562018",
	                        "2cd1277ca783"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
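The inspect output shows port 8441/tcp published only on 127.0.0.1 with a dynamically assigned host port (33921 in this run). A shorter way to recover that mapping, assuming the same container name, is either of the following; the second mirrors the Go template minikube itself uses for 22/tcp later in this log:

	docker port functional-562018 8441/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-562018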
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018: exit status 2 (298.100502ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-831661 image ls --format json --alsologtostderr                                                                                              │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image ls --format short --alsologtostderr                                                                                             │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image ls --format table --alsologtostderr                                                                                             │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ ssh     │ functional-831661 ssh pgrep buildkitd                                                                                                                   │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	│ image   │ functional-831661 image ls --format yaml --alsologtostderr                                                                                              │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image build -t localhost/my-image:functional-831661 testdata/build --alsologtostderr                                                  │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image ls                                                                                                                              │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ delete  │ -p functional-831661                                                                                                                                    │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ start   │ -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	│ start   │ -p functional-562018 --alsologtostderr -v=8                                                                                                             │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:49 UTC │                     │
	│ cache   │ functional-562018 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache add registry.k8s.io/pause:latest                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache add minikube-local-cache-test:functional-562018                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache delete minikube-local-cache-test:functional-562018                                                                              │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl images                                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │                     │
	│ cache   │ functional-562018 cache reload                                                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ kubectl │ functional-562018 kubectl -- --context functional-562018 get pods                                                                                       │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:49:32
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:49:32.175934 1297065 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:49:32.176062 1297065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:49:32.176074 1297065 out.go:374] Setting ErrFile to fd 2...
	I1213 14:49:32.176081 1297065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:49:32.176329 1297065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:49:32.176775 1297065 out.go:368] Setting JSON to false
	I1213 14:49:32.177662 1297065 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23521,"bootTime":1765613851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:49:32.177756 1297065 start.go:143] virtualization:  
	I1213 14:49:32.181250 1297065 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:49:32.184279 1297065 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:49:32.184349 1297065 notify.go:221] Checking for updates...
	I1213 14:49:32.190681 1297065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:49:32.193733 1297065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:32.196589 1297065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:49:32.199444 1297065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 14:49:32.202364 1297065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:49:32.205680 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:32.205788 1297065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:49:32.233101 1297065 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:49:32.233224 1297065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:49:32.299716 1297065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 14:49:32.290425951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:49:32.299832 1297065 docker.go:319] overlay module found
	I1213 14:49:32.305094 1297065 out.go:179] * Using the docker driver based on existing profile
	I1213 14:49:32.307726 1297065 start.go:309] selected driver: docker
	I1213 14:49:32.307744 1297065 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:32.307856 1297065 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:49:32.307958 1297065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:49:32.364202 1297065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 14:49:32.354888078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:49:32.364608 1297065 cni.go:84] Creating CNI manager for ""
	I1213 14:49:32.364673 1297065 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:49:32.364721 1297065 start.go:353] cluster config:
	{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:32.367887 1297065 out.go:179] * Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	I1213 14:49:32.370579 1297065 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 14:49:32.373599 1297065 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 14:49:32.376553 1297065 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:49:32.376606 1297065 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 14:49:32.376621 1297065 cache.go:65] Caching tarball of preloaded images
	I1213 14:49:32.376630 1297065 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 14:49:32.376703 1297065 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 14:49:32.376713 1297065 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 14:49:32.376820 1297065 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
	I1213 14:49:32.396105 1297065 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 14:49:32.396128 1297065 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 14:49:32.396160 1297065 cache.go:243] Successfully downloaded all kic artifacts
	I1213 14:49:32.396191 1297065 start.go:360] acquireMachinesLock for functional-562018: {Name:mk6a7956e4fce5d8e0f4d6fe039ab67ad6cd688b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:49:32.396254 1297065 start.go:364] duration metric: took 40.319µs to acquireMachinesLock for "functional-562018"
	I1213 14:49:32.396277 1297065 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:49:32.396287 1297065 fix.go:54] fixHost starting: 
	I1213 14:49:32.396543 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:32.413077 1297065 fix.go:112] recreateIfNeeded on functional-562018: state=Running err=<nil>
	W1213 14:49:32.413105 1297065 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:49:32.416298 1297065 out.go:252] * Updating the running docker "functional-562018" container ...
	I1213 14:49:32.416337 1297065 machine.go:94] provisionDockerMachine start ...
	I1213 14:49:32.416434 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.434428 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.434755 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.434764 1297065 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:49:32.588560 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:49:32.588587 1297065 ubuntu.go:182] provisioning hostname "functional-562018"
	I1213 14:49:32.588651 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.607983 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.608286 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.608297 1297065 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-562018 && echo "functional-562018" | sudo tee /etc/hostname
	I1213 14:49:32.769183 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:49:32.769274 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.789428 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.789750 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.789773 1297065 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-562018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-562018/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-562018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:49:32.943886 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:49:32.943914 1297065 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 14:49:32.943934 1297065 ubuntu.go:190] setting up certificates
	I1213 14:49:32.943953 1297065 provision.go:84] configureAuth start
	I1213 14:49:32.944016 1297065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:49:32.962011 1297065 provision.go:143] copyHostCerts
	I1213 14:49:32.962065 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:49:32.962109 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 14:49:32.962123 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:49:32.962204 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 14:49:32.962309 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:49:32.962331 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 14:49:32.962339 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:49:32.962367 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 14:49:32.962422 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:49:32.962443 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 14:49:32.962451 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:49:32.962476 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 14:49:32.962539 1297065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.functional-562018 san=[127.0.0.1 192.168.49.2 functional-562018 localhost minikube]
	I1213 14:49:33.179564 1297065 provision.go:177] copyRemoteCerts
	I1213 14:49:33.179638 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:49:33.179690 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.200012 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.307268 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 14:49:33.307352 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 14:49:33.325080 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 14:49:33.325187 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 14:49:33.348055 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 14:49:33.348124 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 14:49:33.368733 1297065 provision.go:87] duration metric: took 424.756928ms to configureAuth
	I1213 14:49:33.368776 1297065 ubuntu.go:206] setting minikube options for container-runtime
	I1213 14:49:33.368958 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:33.368972 1297065 machine.go:97] duration metric: took 952.628419ms to provisionDockerMachine
	I1213 14:49:33.368979 1297065 start.go:293] postStartSetup for "functional-562018" (driver="docker")
	I1213 14:49:33.368990 1297065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:49:33.369043 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:49:33.369100 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.388800 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.495227 1297065 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:49:33.498339 1297065 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 14:49:33.498360 1297065 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 14:49:33.498365 1297065 command_runner.go:130] > VERSION_ID="12"
	I1213 14:49:33.498369 1297065 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 14:49:33.498374 1297065 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 14:49:33.498378 1297065 command_runner.go:130] > ID=debian
	I1213 14:49:33.498382 1297065 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 14:49:33.498387 1297065 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 14:49:33.498400 1297065 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 14:49:33.498729 1297065 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 14:49:33.498752 1297065 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 14:49:33.498764 1297065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 14:49:33.498818 1297065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 14:49:33.498907 1297065 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 14:49:33.498914 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> /etc/ssl/certs/12529342.pem
	I1213 14:49:33.498991 1297065 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> hosts in /etc/test/nested/copy/1252934
	I1213 14:49:33.498996 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> /etc/test/nested/copy/1252934/hosts
	I1213 14:49:33.499038 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1252934
	I1213 14:49:33.506503 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:49:33.524063 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts --> /etc/test/nested/copy/1252934/hosts (40 bytes)
	I1213 14:49:33.542234 1297065 start.go:296] duration metric: took 173.238726ms for postStartSetup
	I1213 14:49:33.542347 1297065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:49:33.542395 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.560689 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.668283 1297065 command_runner.go:130] > 18%
	I1213 14:49:33.668429 1297065 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 14:49:33.673015 1297065 command_runner.go:130] > 160G
	I1213 14:49:33.673516 1297065 fix.go:56] duration metric: took 1.277224674s for fixHost
	I1213 14:49:33.673545 1297065 start.go:83] releasing machines lock for "functional-562018", held for 1.277279647s
	I1213 14:49:33.673651 1297065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:49:33.691077 1297065 ssh_runner.go:195] Run: cat /version.json
	I1213 14:49:33.691140 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.691468 1297065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:49:33.691538 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.709148 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.719417 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.814811 1297065 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 14:49:33.814943 1297065 ssh_runner.go:195] Run: systemctl --version
	I1213 14:49:33.903672 1297065 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 14:49:33.906947 1297065 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 14:49:33.906982 1297065 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 14:49:33.907055 1297065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 14:49:33.911546 1297065 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 14:49:33.911590 1297065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:49:33.911661 1297065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:49:33.919539 1297065 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 14:49:33.919560 1297065 start.go:496] detecting cgroup driver to use...
	I1213 14:49:33.919591 1297065 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 14:49:33.919652 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 14:49:33.935466 1297065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 14:49:33.948503 1297065 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:49:33.948565 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:49:33.964251 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:49:33.977532 1297065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:49:34.098935 1297065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:49:34.240532 1297065 docker.go:234] disabling docker service ...
	I1213 14:49:34.240643 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:49:34.257037 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:49:34.270650 1297065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:49:34.390022 1297065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:49:34.521564 1297065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:49:34.535848 1297065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:49:34.549721 1297065 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 14:49:34.551043 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 14:49:34.560293 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 14:49:34.569539 1297065 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 14:49:34.569607 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 14:49:34.578725 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:49:34.587464 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 14:49:34.595867 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:49:34.604914 1297065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:49:34.612837 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 14:49:34.621746 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 14:49:34.631405 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
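The sed commands above patch /etc/containerd/config.toml in place (sandbox image, OOM-score restriction, cgroup driver, runc runtime type, CNI conf dir, unprivileged ports). As a rough illustration only, here is a standalone Go sketch of the SystemdCgroup rewrite; the path and pattern are taken from the logged sed command, everything else (running it locally, file mode) is an assumption and this is not minikube's own code.

```go
// Standalone sketch of the SystemdCgroup rewrite performed above via sed.
// Path and pattern come from the logged command; running this for real
// requires root, so treat it as illustration only.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))

	if err := os.WriteFile(path, patched, 0o644); err != nil {
		log.Fatal(err)
	}
}
```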
	I1213 14:49:34.640934 1297065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:49:34.647949 1297065 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 14:49:34.649110 1297065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:49:34.656959 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:34.763520 1297065 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 14:49:34.891785 1297065 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 14:49:34.891886 1297065 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 14:49:34.896000 1297065 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 14:49:34.896045 1297065 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 14:49:34.896074 1297065 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1213 14:49:34.896088 1297065 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 14:49:34.896099 1297065 command_runner.go:130] > Access: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896109 1297065 command_runner.go:130] > Modify: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896114 1297065 command_runner.go:130] > Change: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896117 1297065 command_runner.go:130] >  Birth: -
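After restarting containerd, the log shows a "Will wait 60s for socket path" step followed by a stat of the socket. A minimal, self-contained Go sketch of that kind of wait loop is below; the 60s budget and socket path come from the log, while the helper name and the 500ms poll interval are assumptions, not minikube's actual implementation.

```go
// Sketch of waiting for the containerd socket to reappear after a restart.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // the socket exists, so containerd is back up
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is ready")
}
```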
	I1213 14:49:34.896860 1297065 start.go:564] Will wait 60s for crictl version
	I1213 14:49:34.896947 1297065 ssh_runner.go:195] Run: which crictl
	I1213 14:49:34.901248 1297065 command_runner.go:130] > /usr/local/bin/crictl
	I1213 14:49:34.901933 1297065 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 14:49:34.925912 1297065 command_runner.go:130] > Version:  0.1.0
	I1213 14:49:34.925937 1297065 command_runner.go:130] > RuntimeName:  containerd
	I1213 14:49:34.925943 1297065 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 14:49:34.925948 1297065 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 14:49:34.928438 1297065 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 14:49:34.928554 1297065 ssh_runner.go:195] Run: containerd --version
	I1213 14:49:34.949487 1297065 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 14:49:34.951799 1297065 ssh_runner.go:195] Run: containerd --version
	I1213 14:49:34.970090 1297065 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 14:49:34.977895 1297065 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 14:49:34.980777 1297065 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 14:49:34.997091 1297065 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 14:49:35.003196 1297065 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 14:49:35.003415 1297065 kubeadm.go:884] updating cluster {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:49:35.003575 1297065 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:49:35.003657 1297065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:49:35.028469 1297065 command_runner.go:130] > {
	I1213 14:49:35.028488 1297065 command_runner.go:130] >   "images":  [
	I1213 14:49:35.028493 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028502 1297065 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 14:49:35.028509 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028514 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 14:49:35.028518 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028522 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028533 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 14:49:35.028536 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028541 1297065 command_runner.go:130] >       "size":  "40636774",
	I1213 14:49:35.028545 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028549 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028552 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028555 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028563 1297065 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 14:49:35.028567 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028572 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 14:49:35.028574 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028583 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028592 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 14:49:35.028595 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028599 1297065 command_runner.go:130] >       "size":  "8034419",
	I1213 14:49:35.028603 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028607 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028610 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028613 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028620 1297065 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 14:49:35.028624 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028630 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 14:49:35.028633 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028641 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028649 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 14:49:35.028652 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028656 1297065 command_runner.go:130] >       "size":  "21168808",
	I1213 14:49:35.028660 1297065 command_runner.go:130] >       "username":  "nonroot",
	I1213 14:49:35.028664 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028667 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028670 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028677 1297065 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 14:49:35.028680 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028685 1297065 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 14:49:35.028688 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028691 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028698 1297065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 14:49:35.028701 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028706 1297065 command_runner.go:130] >       "size":  "21136588",
	I1213 14:49:35.028710 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028714 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028717 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028721 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028725 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028731 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028734 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028741 1297065 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 14:49:35.028745 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028750 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 14:49:35.028753 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028757 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028764 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 14:49:35.028768 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028772 1297065 command_runner.go:130] >       "size":  "24678359",
	I1213 14:49:35.028775 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028783 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028786 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028790 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028794 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028797 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028799 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028806 1297065 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 14:49:35.028809 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028815 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 14:49:35.028818 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028822 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028829 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 14:49:35.028833 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028837 1297065 command_runner.go:130] >       "size":  "20661043",
	I1213 14:49:35.028841 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028844 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028847 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028852 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028855 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028858 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028861 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028867 1297065 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 14:49:35.028877 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028883 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 14:49:35.028886 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028890 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028897 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 14:49:35.028900 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028905 1297065 command_runner.go:130] >       "size":  "22429671",
	I1213 14:49:35.028908 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028912 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028915 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028919 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028926 1297065 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 14:49:35.028929 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028934 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 14:49:35.028937 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028941 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028948 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 14:49:35.028951 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028955 1297065 command_runner.go:130] >       "size":  "15391364",
	I1213 14:49:35.028959 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028962 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028965 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028969 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028972 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028975 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028978 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028984 1297065 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 14:49:35.028987 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028992 1297065 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 14:49:35.028995 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028998 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.029005 1297065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 14:49:35.029009 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.029016 1297065 command_runner.go:130] >       "size":  "267939",
	I1213 14:49:35.029019 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.029023 1297065 command_runner.go:130] >         "value":  "65535"
	I1213 14:49:35.029030 1297065 command_runner.go:130] >       },
	I1213 14:49:35.029034 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.029037 1297065 command_runner.go:130] >       "pinned":  true
	I1213 14:49:35.029040 1297065 command_runner.go:130] >     }
	I1213 14:49:35.029043 1297065 command_runner.go:130] >   ]
	I1213 14:49:35.029046 1297065 command_runner.go:130] > }
	I1213 14:49:35.031562 1297065 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:49:35.031587 1297065 containerd.go:534] Images already preloaded, skipping extraction
	I1213 14:49:35.031647 1297065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:49:35.054892 1297065 command_runner.go:130] > {
	I1213 14:49:35.054913 1297065 command_runner.go:130] >   "images":  [
	I1213 14:49:35.054918 1297065 command_runner.go:130] >     {
	I1213 14:49:35.054928 1297065 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 14:49:35.054933 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.054939 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 14:49:35.054943 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.054947 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.054959 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 14:49:35.054966 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.054970 1297065 command_runner.go:130] >       "size":  "40636774",
	I1213 14:49:35.054977 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.054982 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.054993 1297065 command_runner.go:130] >     },
	I1213 14:49:35.054996 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055014 1297065 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 14:49:35.055021 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055030 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 14:49:35.055033 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055037 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055045 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 14:49:35.055049 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055053 1297065 command_runner.go:130] >       "size":  "8034419",
	I1213 14:49:35.055057 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055060 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055064 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055067 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055074 1297065 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 14:49:35.055081 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055086 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 14:49:35.055092 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055104 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055117 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 14:49:35.055121 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055125 1297065 command_runner.go:130] >       "size":  "21168808",
	I1213 14:49:35.055135 1297065 command_runner.go:130] >       "username":  "nonroot",
	I1213 14:49:35.055139 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055143 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055151 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055158 1297065 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 14:49:35.055162 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055169 1297065 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 14:49:35.055173 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055177 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055187 1297065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 14:49:35.055193 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055201 1297065 command_runner.go:130] >       "size":  "21136588",
	I1213 14:49:35.055205 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055210 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055217 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055221 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055225 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055231 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055234 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055241 1297065 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 14:49:35.055246 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055254 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 14:49:35.055257 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055261 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055272 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 14:49:35.055278 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055283 1297065 command_runner.go:130] >       "size":  "24678359",
	I1213 14:49:35.055286 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055294 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055300 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055304 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055329 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055335 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055339 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055346 1297065 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 14:49:35.055352 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055358 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 14:49:35.055371 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055375 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055383 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 14:49:35.055388 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055392 1297065 command_runner.go:130] >       "size":  "20661043",
	I1213 14:49:35.055399 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055403 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055410 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055415 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055422 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055425 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055428 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055435 1297065 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 14:49:35.055446 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055452 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 14:49:35.055455 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055460 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055469 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 14:49:35.055477 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055482 1297065 command_runner.go:130] >       "size":  "22429671",
	I1213 14:49:35.055486 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055494 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055497 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055500 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055511 1297065 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 14:49:35.055515 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055524 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 14:49:35.055529 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055533 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055541 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 14:49:35.055547 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055551 1297065 command_runner.go:130] >       "size":  "15391364",
	I1213 14:49:35.055554 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055559 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055564 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055568 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055574 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055578 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055581 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055587 1297065 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 14:49:35.055595 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055602 1297065 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 14:49:35.055608 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055612 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055620 1297065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 14:49:35.055626 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055630 1297065 command_runner.go:130] >       "size":  "267939",
	I1213 14:49:35.055633 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055637 1297065 command_runner.go:130] >         "value":  "65535"
	I1213 14:49:35.055651 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055655 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055659 1297065 command_runner.go:130] >       "pinned":  true
	I1213 14:49:35.055662 1297065 command_runner.go:130] >     }
	I1213 14:49:35.055666 1297065 command_runner.go:130] >   ]
	I1213 14:49:35.055669 1297065 command_runner.go:130] > }
	I1213 14:49:35.057995 1297065 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:49:35.058021 1297065 cache_images.go:86] Images are preloaded, skipping loading
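The "Images are preloaded, skipping loading" decision above is driven by the `sudo crictl images --output json` output dumped twice in the log. A small Go sketch (not minikube's code) of how that JSON shape can be decoded to confirm the required tags are present; the struct fields mirror the keys shown in the dump, and the required-tag list here is only an example drawn from the same output.

```go
// Sketch of a preload check against the crictl images JSON shown above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}

	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}

	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}

	// Example tags taken from the dump above; not minikube's real manifest.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/pause:3.10.1",
	} {
		if !have[want] {
			fmt.Println("missing:", want)
			return
		}
	}
	fmt.Println("all required images are preloaded")
}
```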
	I1213 14:49:35.058031 1297065 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 14:49:35.058154 1297065 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-562018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 14:49:35.058232 1297065 ssh_runner.go:195] Run: sudo crictl info
	I1213 14:49:35.082362 1297065 command_runner.go:130] > {
	I1213 14:49:35.082385 1297065 command_runner.go:130] >   "cniconfig": {
	I1213 14:49:35.082391 1297065 command_runner.go:130] >     "Networks": [
	I1213 14:49:35.082395 1297065 command_runner.go:130] >       {
	I1213 14:49:35.082401 1297065 command_runner.go:130] >         "Config": {
	I1213 14:49:35.082405 1297065 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 14:49:35.082411 1297065 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 14:49:35.082415 1297065 command_runner.go:130] >           "Plugins": [
	I1213 14:49:35.082419 1297065 command_runner.go:130] >             {
	I1213 14:49:35.082423 1297065 command_runner.go:130] >               "Network": {
	I1213 14:49:35.082427 1297065 command_runner.go:130] >                 "ipam": {},
	I1213 14:49:35.082432 1297065 command_runner.go:130] >                 "type": "loopback"
	I1213 14:49:35.082436 1297065 command_runner.go:130] >               },
	I1213 14:49:35.082446 1297065 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 14:49:35.082450 1297065 command_runner.go:130] >             }
	I1213 14:49:35.082457 1297065 command_runner.go:130] >           ],
	I1213 14:49:35.082467 1297065 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 14:49:35.082473 1297065 command_runner.go:130] >         },
	I1213 14:49:35.082488 1297065 command_runner.go:130] >         "IFName": "lo"
	I1213 14:49:35.082495 1297065 command_runner.go:130] >       }
	I1213 14:49:35.082498 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082503 1297065 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 14:49:35.082507 1297065 command_runner.go:130] >     "PluginDirs": [
	I1213 14:49:35.082511 1297065 command_runner.go:130] >       "/opt/cni/bin"
	I1213 14:49:35.082516 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082520 1297065 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 14:49:35.082527 1297065 command_runner.go:130] >     "Prefix": "eth"
	I1213 14:49:35.082530 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082533 1297065 command_runner.go:130] >   "config": {
	I1213 14:49:35.082537 1297065 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 14:49:35.082544 1297065 command_runner.go:130] >       "/etc/cdi",
	I1213 14:49:35.082549 1297065 command_runner.go:130] >       "/var/run/cdi"
	I1213 14:49:35.082552 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082559 1297065 command_runner.go:130] >     "cni": {
	I1213 14:49:35.082562 1297065 command_runner.go:130] >       "binDir": "",
	I1213 14:49:35.082566 1297065 command_runner.go:130] >       "binDirs": [
	I1213 14:49:35.082570 1297065 command_runner.go:130] >         "/opt/cni/bin"
	I1213 14:49:35.082573 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.082578 1297065 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 14:49:35.082581 1297065 command_runner.go:130] >       "confTemplate": "",
	I1213 14:49:35.082586 1297065 command_runner.go:130] >       "ipPref": "",
	I1213 14:49:35.082589 1297065 command_runner.go:130] >       "maxConfNum": 1,
	I1213 14:49:35.082593 1297065 command_runner.go:130] >       "setupSerially": false,
	I1213 14:49:35.082601 1297065 command_runner.go:130] >       "useInternalLoopback": false
	I1213 14:49:35.082604 1297065 command_runner.go:130] >     },
	I1213 14:49:35.082611 1297065 command_runner.go:130] >     "containerd": {
	I1213 14:49:35.082617 1297065 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 14:49:35.082622 1297065 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 14:49:35.082629 1297065 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 14:49:35.082634 1297065 command_runner.go:130] >       "runtimes": {
	I1213 14:49:35.082637 1297065 command_runner.go:130] >         "runc": {
	I1213 14:49:35.082648 1297065 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 14:49:35.082654 1297065 command_runner.go:130] >           "PodAnnotations": null,
	I1213 14:49:35.082659 1297065 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 14:49:35.082672 1297065 command_runner.go:130] >           "cgroupWritable": false,
	I1213 14:49:35.082676 1297065 command_runner.go:130] >           "cniConfDir": "",
	I1213 14:49:35.082680 1297065 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 14:49:35.082684 1297065 command_runner.go:130] >           "io_type": "",
	I1213 14:49:35.082688 1297065 command_runner.go:130] >           "options": {
	I1213 14:49:35.082693 1297065 command_runner.go:130] >             "BinaryName": "",
	I1213 14:49:35.082699 1297065 command_runner.go:130] >             "CriuImagePath": "",
	I1213 14:49:35.082703 1297065 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 14:49:35.082707 1297065 command_runner.go:130] >             "IoGid": 0,
	I1213 14:49:35.082714 1297065 command_runner.go:130] >             "IoUid": 0,
	I1213 14:49:35.082719 1297065 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 14:49:35.082725 1297065 command_runner.go:130] >             "Root": "",
	I1213 14:49:35.082729 1297065 command_runner.go:130] >             "ShimCgroup": "",
	I1213 14:49:35.082743 1297065 command_runner.go:130] >             "SystemdCgroup": false
	I1213 14:49:35.082746 1297065 command_runner.go:130] >           },
	I1213 14:49:35.082751 1297065 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 14:49:35.082758 1297065 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 14:49:35.082765 1297065 command_runner.go:130] >           "runtimePath": "",
	I1213 14:49:35.082769 1297065 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 14:49:35.082774 1297065 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 14:49:35.082778 1297065 command_runner.go:130] >           "snapshotter": ""
	I1213 14:49:35.082784 1297065 command_runner.go:130] >         }
	I1213 14:49:35.082787 1297065 command_runner.go:130] >       }
	I1213 14:49:35.082790 1297065 command_runner.go:130] >     },
	I1213 14:49:35.082801 1297065 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 14:49:35.082809 1297065 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 14:49:35.082816 1297065 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 14:49:35.082820 1297065 command_runner.go:130] >     "disableApparmor": false,
	I1213 14:49:35.082825 1297065 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 14:49:35.082832 1297065 command_runner.go:130] >     "disableProcMount": false,
	I1213 14:49:35.082839 1297065 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 14:49:35.082845 1297065 command_runner.go:130] >     "enableCDI": true,
	I1213 14:49:35.082850 1297065 command_runner.go:130] >     "enableSelinux": false,
	I1213 14:49:35.082857 1297065 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 14:49:35.082862 1297065 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 14:49:35.082866 1297065 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 14:49:35.082871 1297065 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 14:49:35.082875 1297065 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 14:49:35.082880 1297065 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 14:49:35.082887 1297065 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 14:49:35.082893 1297065 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 14:49:35.082904 1297065 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 14:49:35.082910 1297065 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 14:49:35.082915 1297065 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 14:49:35.082926 1297065 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 14:49:35.082932 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082936 1297065 command_runner.go:130] >   "features": {
	I1213 14:49:35.082943 1297065 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 14:49:35.082946 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082950 1297065 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 14:49:35.082959 1297065 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 14:49:35.082976 1297065 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 14:49:35.082980 1297065 command_runner.go:130] >   "runtimeHandlers": [
	I1213 14:49:35.082984 1297065 command_runner.go:130] >     {
	I1213 14:49:35.082988 1297065 command_runner.go:130] >       "features": {
	I1213 14:49:35.083000 1297065 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 14:49:35.083004 1297065 command_runner.go:130] >         "user_namespaces": true
	I1213 14:49:35.083008 1297065 command_runner.go:130] >       }
	I1213 14:49:35.083012 1297065 command_runner.go:130] >     },
	I1213 14:49:35.083017 1297065 command_runner.go:130] >     {
	I1213 14:49:35.083021 1297065 command_runner.go:130] >       "features": {
	I1213 14:49:35.083026 1297065 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 14:49:35.083033 1297065 command_runner.go:130] >         "user_namespaces": true
	I1213 14:49:35.083041 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083055 1297065 command_runner.go:130] >       "name": "runc"
	I1213 14:49:35.083058 1297065 command_runner.go:130] >     }
	I1213 14:49:35.083061 1297065 command_runner.go:130] >   ],
	I1213 14:49:35.083064 1297065 command_runner.go:130] >   "status": {
	I1213 14:49:35.083068 1297065 command_runner.go:130] >     "conditions": [
	I1213 14:49:35.083077 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083081 1297065 command_runner.go:130] >         "message": "",
	I1213 14:49:35.083085 1297065 command_runner.go:130] >         "reason": "",
	I1213 14:49:35.083089 1297065 command_runner.go:130] >         "status": true,
	I1213 14:49:35.083098 1297065 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 14:49:35.083104 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083107 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083113 1297065 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 14:49:35.083118 1297065 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 14:49:35.083122 1297065 command_runner.go:130] >         "status": false,
	I1213 14:49:35.083128 1297065 command_runner.go:130] >         "type": "NetworkReady"
	I1213 14:49:35.083132 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083135 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083160 1297065 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 14:49:35.083171 1297065 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 14:49:35.083176 1297065 command_runner.go:130] >         "status": false,
	I1213 14:49:35.083182 1297065 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 14:49:35.083186 1297065 command_runner.go:130] >       }
	I1213 14:49:35.083190 1297065 command_runner.go:130] >     ]
	I1213 14:49:35.083196 1297065 command_runner.go:130] >   }
	I1213 14:49:35.083199 1297065 command_runner.go:130] > }
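The `sudo crictl info` dump above also carries the runtime conditions; note that NetworkReady is still false ("cni plugin not initialized") because no CNI config has been written yet, which is why the next lines pick kindnet. A minimal Go sketch (not minikube's code) for pulling those conditions out of the same JSON; the field names mirror the keys in the dump.

```go
// Sketch of reading runtime conditions from `sudo crictl info`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type runtimeInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}

	var info runtimeInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}

	for _, c := range info.Status.Conditions {
		fmt.Printf("%-36s status=%-5v reason=%s\n", c.Type, c.Status, c.Reason)
	}
}
```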
	I1213 14:49:35.086343 1297065 cni.go:84] Creating CNI manager for ""
	I1213 14:49:35.086370 1297065 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:49:35.086397 1297065 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:49:35.086420 1297065 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-562018 NodeName:functional-562018 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:49:35.086540 1297065 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-562018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 14:49:35.086621 1297065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 14:49:35.094718 1297065 command_runner.go:130] > kubeadm
	I1213 14:49:35.094739 1297065 command_runner.go:130] > kubectl
	I1213 14:49:35.094743 1297065 command_runner.go:130] > kubelet
	I1213 14:49:35.094761 1297065 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:49:35.094814 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:49:35.102589 1297065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 14:49:35.115905 1297065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 14:49:35.129462 1297065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 14:49:35.142335 1297065 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 14:49:35.146161 1297065 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 14:49:35.146280 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:35.271079 1297065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:49:35.585791 1297065 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018 for IP: 192.168.49.2
	I1213 14:49:35.585864 1297065 certs.go:195] generating shared ca certs ...
	I1213 14:49:35.585895 1297065 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:35.586063 1297065 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 14:49:35.586138 1297065 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 14:49:35.586175 1297065 certs.go:257] generating profile certs ...
	I1213 14:49:35.586327 1297065 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key
	I1213 14:49:35.586437 1297065 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee
	I1213 14:49:35.586523 1297065 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key
	I1213 14:49:35.586557 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 14:49:35.586602 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 14:49:35.586632 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 14:49:35.586672 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 14:49:35.586707 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 14:49:35.586737 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 14:49:35.586777 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 14:49:35.586811 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 14:49:35.586902 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 14:49:35.586962 1297065 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 14:49:35.586986 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:49:35.587046 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 14:49:35.587098 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:49:35.587157 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 14:49:35.587232 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:49:35.587302 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.587371 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem -> /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.587399 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.588006 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:49:35.609077 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:49:35.630697 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:49:35.652426 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 14:49:35.670342 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 14:49:35.687837 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 14:49:35.705877 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:49:35.723466 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:49:35.740679 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:49:35.758304 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 14:49:35.776736 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 14:49:35.794339 1297065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:49:35.806740 1297065 ssh_runner.go:195] Run: openssl version
	I1213 14:49:35.812461 1297065 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 14:49:35.812883 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.820227 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:49:35.827978 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831610 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831636 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831688 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.871766 1297065 command_runner.go:130] > b5213941
	I1213 14:49:35.872189 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:49:35.879531 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.886529 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 14:49:35.894015 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897550 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897859 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897930 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.938203 1297065 command_runner.go:130] > 51391683
	I1213 14:49:35.938708 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 14:49:35.946069 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.953176 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 14:49:35.960486 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964477 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964589 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964665 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 14:49:36.007360 1297065 command_runner.go:130] > 3ec20f2e
	I1213 14:49:36.007602 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 14:49:36.019390 1297065 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:49:36.024551 1297065 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:49:36.024587 1297065 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 14:49:36.024604 1297065 command_runner.go:130] > Device: 259,1	Inode: 2346070     Links: 1
	I1213 14:49:36.024612 1297065 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 14:49:36.024618 1297065 command_runner.go:130] > Access: 2025-12-13 14:45:28.579602026 +0000
	I1213 14:49:36.024623 1297065 command_runner.go:130] > Modify: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024628 1297065 command_runner.go:130] > Change: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024634 1297065 command_runner.go:130] >  Birth: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024743 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:49:36.067430 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.067964 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:49:36.109753 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.110299 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:49:36.151650 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.152123 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:49:36.199598 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.200366 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:49:36.241923 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.242478 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 14:49:36.282927 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.283387 1297065 kubeadm.go:401] StartCluster: {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:36.283480 1297065 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 14:49:36.283586 1297065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:49:36.308975 1297065 cri.go:89] found id: ""
	I1213 14:49:36.309092 1297065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:49:36.316103 1297065 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 14:49:36.316129 1297065 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 14:49:36.316138 1297065 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 14:49:36.317085 1297065 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 14:49:36.317145 1297065 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 14:49:36.317231 1297065 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 14:49:36.324724 1297065 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:49:36.325158 1297065 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-562018" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.325271 1297065 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "functional-562018" cluster setting kubeconfig missing "functional-562018" context setting]
	I1213 14:49:36.325603 1297065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.326011 1297065 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.326154 1297065 kapi.go:59] client config for functional-562018: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key", CAFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:49:36.326701 1297065 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 14:49:36.326719 1297065 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 14:49:36.326724 1297065 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 14:49:36.326733 1297065 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 14:49:36.326744 1297065 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 14:49:36.327001 1297065 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 14:49:36.327093 1297065 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 14:49:36.334496 1297065 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 14:49:36.334531 1297065 kubeadm.go:602] duration metric: took 17.366177ms to restartPrimaryControlPlane
	I1213 14:49:36.334540 1297065 kubeadm.go:403] duration metric: took 51.160034ms to StartCluster
	I1213 14:49:36.334555 1297065 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.334613 1297065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.335214 1297065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.335450 1297065 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 14:49:36.335789 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:36.335866 1297065 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 14:49:36.335932 1297065 addons.go:70] Setting storage-provisioner=true in profile "functional-562018"
	I1213 14:49:36.335945 1297065 addons.go:239] Setting addon storage-provisioner=true in "functional-562018"
	I1213 14:49:36.335975 1297065 host.go:66] Checking if "functional-562018" exists ...
	I1213 14:49:36.336461 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.336835 1297065 addons.go:70] Setting default-storageclass=true in profile "functional-562018"
	I1213 14:49:36.336857 1297065 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-562018"
	I1213 14:49:36.337151 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.340699 1297065 out.go:179] * Verifying Kubernetes components...
	I1213 14:49:36.343477 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:36.374082 1297065 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 14:49:36.376797 1297065 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.376892 1297065 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:36.376917 1297065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 14:49:36.376979 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:36.377245 1297065 kapi.go:59] client config for functional-562018: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key", CAFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:49:36.377532 1297065 addons.go:239] Setting addon default-storageclass=true in "functional-562018"
	I1213 14:49:36.377566 1297065 host.go:66] Checking if "functional-562018" exists ...
	I1213 14:49:36.377992 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.415567 1297065 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:36.415590 1297065 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 14:49:36.415656 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:36.416969 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:36.442534 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:36.534721 1297065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:49:36.592567 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:36.600370 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:37.335898 1297065 node_ready.go:35] waiting up to 6m0s for node "functional-562018" to be "Ready" ...
	I1213 14:49:37.335934 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.336074 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336106 1297065 retry.go:31] will retry after 199.574589ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336165 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.336178 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336184 1297065 retry.go:31] will retry after 285.216803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336272 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:37.336359 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:37.336679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:37.536000 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:37.591050 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.594766 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.594797 1297065 retry.go:31] will retry after 489.410948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.621926 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:37.677113 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.681307 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.681342 1297065 retry.go:31] will retry after 401.770697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.836587 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:37.836683 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:37.837004 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.083592 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:38.085139 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:38.190416 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.194296 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.194326 1297065 retry.go:31] will retry after 757.686696ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.207705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.207792 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.207830 1297065 retry.go:31] will retry after 505.194475ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.337091 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:38.337186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:38.337548 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.714015 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:38.783498 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.783559 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.783593 1297065 retry.go:31] will retry after 988.219406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.836722 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:38.836873 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:38.837238 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.952600 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:39.020705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:39.020749 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.020768 1297065 retry.go:31] will retry after 1.072702638s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.337164 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:39.337235 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:39.337545 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:39.337593 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:39.772102 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:39.836685 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:39.836850 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:39.837201 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:39.843566 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:39.843633 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.843675 1297065 retry.go:31] will retry after 1.296209829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.093780 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:40.156222 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:40.156329 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.156372 1297065 retry.go:31] will retry after 965.768616ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.336552 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:40.336651 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:40.336970 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:40.836822 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:40.836895 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:40.837217 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:41.122779 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:41.140323 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:41.215097 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:41.215182 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.215214 1297065 retry.go:31] will retry after 2.369565148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.219568 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:41.219636 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.219656 1297065 retry.go:31] will retry after 2.455142313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.336947 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:41.337019 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:41.337416 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:41.837047 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:41.837124 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:41.837388 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:41.837438 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:42.337111 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:42.337201 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:42.337621 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:42.836363 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:42.836444 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:42.836803 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:43.336226 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:43.336304 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:43.336552 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:43.585084 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:43.645189 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:43.649081 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.649137 1297065 retry.go:31] will retry after 3.995275361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.675423 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:43.738811 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:43.738856 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.738876 1297065 retry.go:31] will retry after 3.319355388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.837038 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:43.837127 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:43.837467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:43.837521 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:44.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:44.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:44.336669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:44.836348 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:44.836422 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:44.836715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:45.336277 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:45.336359 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:45.336715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:45.836385 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:45.836482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:45.836839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:46.336842 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:46.336917 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:46.337174 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:46.337224 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:46.836567 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:46.836641 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:46.837050 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:47.058405 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:47.140540 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:47.144585 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.144615 1297065 retry.go:31] will retry after 3.814662677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:47.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:47.336673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:47.645178 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:47.704569 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:47.708191 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.708226 1297065 retry.go:31] will retry after 4.571128182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.836452 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:47.836522 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:47.836787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:48.336260 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:48.336350 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:48.336628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:48.836239 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:48.836318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:48.836676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:48.836743 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:49.336220 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:49.336290 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:49.336531 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:49.836286 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:49.836362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:49.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.336380 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:50.336455 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:50.336799 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.836292 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:50.836366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:50.836651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.960127 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:51.026705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:51.026749 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:51.026767 1297065 retry.go:31] will retry after 9.152833031s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
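
	Each addon apply in the log is a kubectl invocation run over SSH with KUBECONFIG set; client-side validation fails because downloading the OpenAPI schema needs a live apiserver on localhost:8441. A hedged sketch of the same invocation pattern from Go follows; the paths come from the log, and the --validate=false fallback mirrors the error text's suggestion (skipping validation would not by itself make the apply succeed while the apiserver is down):

	```go
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyManifest shells out to kubectl the way the ssh_runner lines do.
	// When the apiserver is unreachable, validation cannot download the
	// OpenAPI schema; --validate=false skips that step, as the error hints.
	func applyManifest(kubectl, kubeconfig, manifest string, skipValidation bool) error {
		args := []string{"apply", "--force", "-f", manifest}
		if skipValidation {
			args = append(args, "--validate=false")
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
		}
		return nil
	}

	func main() {
		err := applyManifest(
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			false,
		)
		fmt.Println(err)
	}
	```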
	I1213 14:49:51.336157 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:51.336252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:51.336592 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:51.336645 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:51.836328 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:51.836439 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:51.836752 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:52.280634 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:52.336265 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:52.336348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:52.336649 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:52.351151 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:52.351191 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:52.351210 1297065 retry.go:31] will retry after 6.806315756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:52.837084 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:52.837176 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:52.837503 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:53.336231 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:53.336347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:53.336679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:53.336735 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:53.836278 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:53.836358 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:53.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:54.336375 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:54.336453 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:54.336787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:54.836534 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:54.836609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:54.836960 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:55.336530 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:55.336608 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:55.336965 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:55.337034 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
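
	The round_trippers.go lines record each request's verb, URL, and headers, then the response status and latency; while the connection is refused the status stays empty and the latency is ~0ms. A minimal sketch of a logging http.RoundTripper producing similar output (field names mirror the log for readability, not Kubernetes' actual round_trippers implementation):

	```go
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// loggingTransport wraps another RoundTripper and prints the verb, URL and
	// round-trip latency, leaving the status empty when the request fails
	// outright, similar to the round_trippers lines in the log.
	type loggingTransport struct {
		next http.RoundTripper
	}

	func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
		fmt.Printf("\"Request\" verb=%q url=%q\n", req.Method, req.URL.String())
		start := time.Now()
		resp, err := t.next.RoundTrip(req)
		status := ""
		if resp != nil {
			status = resp.Status
		}
		fmt.Printf("\"Response\" status=%q milliseconds=%d\n", status, time.Since(start).Milliseconds())
		return resp, err
	}

	func main() {
		client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
		// Illustrative target; the log's requests go to the minikube apiserver.
		if _, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-562018"); err != nil {
			fmt.Println("request error:", err)
		}
	}
	```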
	I1213 14:49:55.836817 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:55.836889 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:55.837215 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:56.337019 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:56.337095 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:56.337433 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:56.836157 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:56.836242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:56.836511 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:57.336450 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:57.336529 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:57.336899 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:57.836240 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:57.836331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:57.836629 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:57.836681 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:58.336184 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:58.336276 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:58.336593 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:58.836286 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:58.836386 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:58.836728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:59.158224 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:59.216557 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:59.216609 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:59.216627 1297065 retry.go:31] will retry after 13.782587086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:59.336899 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:59.336976 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:59.337309 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:59.837050 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:59.837117 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:59.837393 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:59.837436 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:00.179978 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:00.336210 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:00.336301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:00.337482 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 14:50:00.358964 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:00.359008 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:00.359030 1297065 retry.go:31] will retry after 12.357990487s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:00.836789 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:00.836882 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:00.837203 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:01.336532 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:01.336609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:01.336921 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:01.836255 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:01.836341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:01.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:02.336512 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:02.336592 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:02.336956 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:02.337013 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:02.836530 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:02.836611 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:02.836888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:03.336279 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:03.336358 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:03.336701 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:03.836401 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:03.836474 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:03.836845 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:04.336380 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:04.336454 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:04.336784 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:04.836248 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:04.836328 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:04.836660 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:04.836716 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:05.336407 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:05.336490 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:05.336806 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:05.836467 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:05.836548 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:05.836865 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:06.336870 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:06.336953 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:06.337350 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:06.837024 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:06.837097 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:06.837419 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:06.837478 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:07.336416 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:07.336488 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:07.336747 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:07.836490 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:07.836567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:07.836906 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:08.336625 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:08.336699 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:08.337020 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:08.836518 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:08.836588 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:08.836865 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:09.336612 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:09.336692 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:09.337049 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:09.337109 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:09.836858 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:09.836939 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:09.837272 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:10.337051 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:10.337125 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:10.337387 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:10.837153 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:10.837234 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:10.837582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:11.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:11.336261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:11.336582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:11.836188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:11.836256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:11.836517 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:11.836567 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:12.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:12.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:12.336916 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:12.717305 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:12.775348 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:12.775393 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:12.775414 1297065 retry.go:31] will retry after 16.474515121s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:12.836567 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:12.836650 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:12.837019 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:13.000372 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:13.059399 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:13.063613 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:13.063652 1297065 retry.go:31] will retry after 8.071550656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:13.336122 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:13.336199 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:13.336467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:13.836136 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:13.836218 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:13.836591 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:13.836660 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:14.336363 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:14.336438 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:14.336774 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:14.836188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:14.836261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:14.836540 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:15.336277 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:15.336353 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:15.336677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:15.836219 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:15.836293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:15.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:16.336546 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:16.336617 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:16.336864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:16.336904 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:16.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:16.836348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:16.836648 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:17.336586 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:17.336661 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:17.337008 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:17.836520 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:17.836585 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:17.836858 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:18.336257 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:18.336354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:18.336715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:18.836428 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:18.836502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:18.836842 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:18.836899 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:19.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:19.336311 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:19.336563 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:19.836231 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:19.836306 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:19.836619 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:20.336334 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:20.336416 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:20.336797 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:20.836189 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:20.836268 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:20.836618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:21.136217 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:21.193283 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:21.196963 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:21.196996 1297065 retry.go:31] will retry after 15.530830741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:21.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:21.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:21.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:21.336677 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:21.836352 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:21.836433 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:21.836751 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:22.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:22.336615 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:22.336948 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:22.836275 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:22.836348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:22.836696 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:23.336400 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:23.336482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:23.336828 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:23.336887 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:23.836198 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:23.836296 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:23.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:24.336254 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:24.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:24.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:24.836327 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:24.836403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:24.836743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:25.336278 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:25.336354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:25.336703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:25.836226 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:25.836309 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:25.836679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:25.836740 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:26.337200 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:26.337293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:26.337628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:26.836405 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:26.836480 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:26.836777 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:27.336562 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:27.336653 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:27.337005 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:27.836230 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:27.836307 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:27.836657 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:28.336177 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:28.336267 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:28.336587 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:28.336638 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:28.836250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:28.836332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:28.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:29.250199 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:29.308318 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:29.311716 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:29.311747 1297065 retry.go:31] will retry after 30.463725654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:29.336999 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:29.337080 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:29.337458 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:29.836155 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:29.836222 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:29.836520 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:30.336243 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:30.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:30.336620 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:30.336669 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:30.836228 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:30.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:30.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:31.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:31.336285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:31.336588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:31.836269 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:31.836340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:31.836687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:32.336490 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:32.336568 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:32.336902 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:32.336957 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:32.836192 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:32.836262 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:32.836598 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:33.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:33.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:33.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:33.836245 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:33.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:33.836664 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:34.336184 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:34.336253 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:34.336535 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:34.836284 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:34.836360 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:34.836791 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:34.836848 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:35.336516 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:35.336596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:35.336938 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:35.836527 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:35.836598 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:35.836864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:36.336942 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:36.337020 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:36.337342 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:36.728993 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:36.785078 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:36.788836 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:36.788868 1297065 retry.go:31] will retry after 31.693829046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
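[editor's note] The storageclass addon apply fails because kubectl cannot reach the apiserver on localhost:8441, and minikube's retry.go schedules another attempt roughly 30 s later. The following is a hedged sketch of that "apply failed, will retry" pattern around an exec'd kubectl command; the helper name applyWithRetry, the attempt count, and the backoff values are assumptions for illustration, not minikube's addons.go.

// Hypothetical sketch of the retry-on-failure pattern seen above: run
// `kubectl apply` and retry after a delay while the apiserver is unreachable.
// Helper name and backoff values are assumptions, not minikube's addons.go.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("%w: %s", err, out)
		// Randomized delay, mirroring the ~30 s "will retry after ..." lines in the log.
		delay := 20*time.Second + time.Duration(rand.Int63n(int64(20*time.Second)))
		fmt.Printf("apply failed, will retry after %s: %v\n", delay, lastErr)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
		3,
	)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}

The same pattern repeats below for storage-provisioner.yaml; in both cases the retries cannot succeed until kube-apiserver starts listening on port 8441.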
	I1213 14:50:36.837069 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:36.837145 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:36.837461 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:36.837513 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:37.336193 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:37.336260 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:37.336549 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:37.836234 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:37.836312 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:37.836628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:38.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:38.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:38.336672 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:38.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:38.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:38.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:39.336229 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:39.336304 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:39.336650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:39.336709 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:39.836236 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:39.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:39.836618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:40.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:40.336355 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:40.336614 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:40.836272 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:40.836382 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:40.836789 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:41.336524 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:41.336601 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:41.336927 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:41.336987 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:41.836201 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:41.836278 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:41.836626 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:42.336633 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:42.336716 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:42.337072 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:42.836881 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:42.836955 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:42.837306 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:43.337071 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:43.337144 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:43.337415 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:43.337468 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:43.836983 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:43.837056 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:43.837412 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:44.336153 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:44.336229 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:44.336573 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:44.836267 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:44.836356 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:44.836695 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:45.336450 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:45.336538 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:45.336949 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:45.836752 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:45.836829 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:45.837178 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:45.837235 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:46.336981 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:46.337060 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:46.337351 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:46.836213 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:46.836319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:46.836733 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:47.336523 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:47.336680 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:47.336969 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:47.836511 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:47.836579 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:47.836844 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:48.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:48.336310 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:48.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:48.336704 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:48.836371 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:48.836487 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:48.836832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:49.336188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:49.336255 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:49.336544 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:49.836263 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:49.836365 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:49.836653 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:50.336392 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:50.336468 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:50.336808 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:50.336866 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:50.836325 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:50.836439 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:50.836713 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:51.336252 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:51.336346 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:51.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:51.836280 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:51.836353 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:51.836671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:52.336550 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:52.336630 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:52.336893 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:52.336943 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:52.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:52.836322 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:52.836667 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:53.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:53.336342 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:53.336699 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:53.836191 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:53.836264 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:53.836543 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:54.336246 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:54.336345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:54.336700 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:54.836402 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:54.836475 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:54.836811 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:54.836869 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:55.336360 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:55.336437 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:55.336727 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:55.836432 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:55.836512 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:55.836850 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:56.337034 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:56.337132 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:56.337451 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:56.836142 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:56.836214 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:56.836473 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:57.336469 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:57.336554 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:57.336893 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:57.336949 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:57.836297 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:57.836381 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:57.836714 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:58.336385 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:58.336465 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:58.336743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:58.836460 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:58.836541 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:58.836889 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:59.336265 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:59.336340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:59.336697 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:59.776318 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:59.836157 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:59.836232 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:59.836466 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:59.836509 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:59.839555 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:59.839592 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:59.839611 1297065 retry.go:31] will retry after 31.022889465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:51:00.336284 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:00.336385 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:00.337017 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:00.836870 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:00.836951 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:00.837274 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:01.337018 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:01.337093 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:01.337377 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:01.836106 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:01.836178 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:01.836532 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:01.836591 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:02.336582 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:02.336658 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:02.336989 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:02.836525 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:02.836602 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:02.836897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:03.336270 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:03.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:03.336732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:03.836448 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:03.836526 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:03.836864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:03.836920 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:04.336555 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:04.336630 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:04.336888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:04.836543 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:04.836644 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:04.836971 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:05.336771 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:05.336847 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:05.337186 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:05.836528 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:05.836603 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:05.836858 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:06.336901 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:06.336978 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:06.337275 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:06.337322 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:06.836616 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:06.836698 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:06.837028 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:07.336511 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:07.336579 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:07.336834 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:07.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:07.836317 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:07.836644 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:08.336229 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:08.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:08.336668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:08.482933 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:51:08.546772 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:08.546820 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:08.546914 1297065 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
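[editor's note] Every failure in this stretch of the log reduces to "connect: connection refused" against the apiserver port: 192.168.49.2:8441 from the test host and localhost:8441 from inside the node. When triaging a run like this, a quick TCP probe of those endpoints (addresses taken from the log) confirms whether kube-apiserver is listening at all before digging into addon or node-readiness errors. A minimal, hypothetical sketch:

// Hypothetical sketch: probe the apiserver endpoints that the log shows
// refusing connections, to confirm whether kube-apiserver is listening at all.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoints taken from the failures above.
	for _, addr := range []string{"192.168.49.2:8441", "localhost:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: not reachable: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}
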
	I1213 14:51:08.836114 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:08.836184 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:08.836454 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:08.836495 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:09.336176 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:09.336252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:09.336597 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:09.836346 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:09.836424 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:09.836727 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:10.336174 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:10.336265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:10.336548 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:10.836166 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:10.836272 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:10.836571 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:10.836621 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:11.336180 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:11.336265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:11.336582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:11.836217 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:11.836300 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:11.836626 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:12.336568 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:12.336663 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:12.336970 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:12.836801 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:12.836879 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:12.837247 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:12.837301 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:13.336980 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:13.337062 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:13.337320 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:13.837125 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:13.837211 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:13.837540 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:14.336301 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:14.336390 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:14.336757 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:14.836166 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:14.836241 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:14.836499 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:15.336228 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:15.336300 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:15.336648 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:15.336706 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:15.836385 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:15.836461 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:15.836787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:16.336816 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:16.336889 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:16.337169 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:16.836948 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:16.837028 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:16.837350 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:17.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:17.336319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:17.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:17.836172 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:17.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:17.836555 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:17.836606 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:18.336236 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:18.336313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:18.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:18.836373 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:18.836452 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:18.836760 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:19.336167 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:19.336238 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:19.336538 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:19.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:19.836297 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:19.836617 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:19.836675 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:20.336339 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:20.336412 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:20.336771 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:20.836180 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:20.836251 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:20.836567 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:21.336259 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:21.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:21.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:21.836380 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:21.836462 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:21.836800 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:21.836855 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:22.336522 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:22.336591 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:22.336867 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:22.836547 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:22.836626 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:22.836957 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:23.336750 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:23.336825 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:23.337141 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:23.836507 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:23.836582 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:23.836841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:23.836883 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:24.336607 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:24.336681 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:24.337016 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:24.836840 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:24.836916 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:24.837240 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:25.336547 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:25.336619 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:25.336933 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:25.836630 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:25.836712 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:25.837049 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:25.837104 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:26.337004 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:26.337079 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:26.337406 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:26.836128 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:26.836203 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:26.836467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:27.336400 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:27.336476 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:27.336833 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:27.836240 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:27.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:27.836680 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:28.336379 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:28.336452 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:28.336710 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:28.336750 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:28.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:28.836333 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:28.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:29.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:29.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:29.336705 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:29.836347 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:29.836422 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:29.836690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:30.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:30.336351 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:30.336706 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:30.836402 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:30.836499 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:30.836836 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:30.836891 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:30.863046 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:51:30.922204 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:30.922247 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:30.922363 1297065 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 14:51:30.925463 1297065 out.go:179] * Enabled addons: 
	I1213 14:51:30.929007 1297065 addons.go:530] duration metric: took 1m54.593151344s for enable addons: enabled=[]
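	[editor's note] The storage-provisioner apply above failed only because nothing was answering on port 8441 yet: kubectl could not download the OpenAPI schema, and the node_ready loop around it keeps getting "connection refused" on the same address. Below is a minimal sketch of that wait-then-apply pattern, written to illustrate what the log is doing rather than to reproduce minikube's addon code; the endpoint, file paths, and 2-minute deadline are assumptions taken from this log, not a verified API.

	// Minimal sketch (not minikube's implementation): wait until the apiserver
	// on 192.168.49.2:8441 stops refusing TCP connections, then run the addon
	// apply that failed in the log above. Paths and addresses are illustrative.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os/exec"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The test apiserver uses a self-signed certificate, so skip
			// verification for this reachability probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		// Poll roughly every 500ms, like the node_ready loop in this log,
		// but with an overall deadline instead of retrying indefinitely.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.49.2:8441/readyz")
			if err == nil {
				// Any HTTP response means the port is no longer refusing
				// connections, which is the only failure mode seen above.
				resp.Body.Close()
				break
			}
			time.Sleep(500 * time.Millisecond)
		}

		// Only attempt the apply once the apiserver answers; applying while it
		// is down is what produced the "failed to download openapi" error.
		cmd := exec.Command("kubectl", "apply", "--force",
			"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\n", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}

	Treating any HTTP response as "reachable" is sufficient here because the errors in this log are TCP-level connection refusals, not an unhealthy apiserver; a stricter check could require a 200 from /readyz.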
	I1213 14:51:31.336478 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:31.336574 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:31.336911 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:31.836663 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:31.836742 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:31.837400 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:32.336285 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:32.336362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:32.337832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 14:51:32.836218 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:32.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:32.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:33.336237 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:33.336311 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:33.336634 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:33.336688 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:33.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:33.836296 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:33.836630 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:34.336182 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:34.336256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:34.336569 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:34.836237 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:34.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:34.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:35.336246 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:35.336327 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:35.336677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:35.336739 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:35.836381 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:35.836450 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:35.836754 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:36.336847 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:36.336928 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:36.337255 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:36.836533 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:36.836613 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:36.836939 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:37.336507 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:37.336573 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:37.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:37.336879 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:37.836228 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:37.836301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:37.836594 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:38.336263 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:38.336341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:38.336675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:38.836200 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:38.836285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:38.836811 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:39.336276 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:39.336373 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:39.336728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:39.836259 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:39.836349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:39.836684 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:39.836742 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:40.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:40.336295 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:40.336618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:40.836435 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:40.836524 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:40.836905 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:41.336775 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:41.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:41.337175 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:41.836559 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:41.836631 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:41.836894 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:41.836936 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:42.336658 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:42.336748 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:42.337128 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:42.836909 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:42.836987 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:42.837289 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:43.337040 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:43.337127 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:43.337474 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:43.836192 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:43.836275 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:43.836624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:44.336291 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:44.336388 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:44.336784 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:44.336841 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:44.836180 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:44.836254 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:44.836551 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:45.336321 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:45.336400 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:45.336690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:45.836435 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:45.836510 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:45.836833 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:46.336779 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:46.336848 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:46.337141 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:46.337201 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:46.836518 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:46.836596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:46.836935 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:47.336893 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:47.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:47.337308 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:47.836529 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:47.836614 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:47.836876 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:48.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:48.336337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:48.336692 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:48.836415 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:48.836494 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:48.836834 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:48.836892 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:49.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:49.336621 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:49.336897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:49.836323 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:49.836400 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:49.836703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:50.336284 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:50.336361 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:50.336695 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:50.836346 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:50.836424 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:50.836742 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:51.336225 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:51.336303 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:51.336650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:51.336709 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:51.836373 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:51.836452 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:51.836792 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:52.336469 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:52.336538 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:52.336793 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:52.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:52.836345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:52.836678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:53.336269 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:53.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:53.336675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:53.336740 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:53.836126 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:53.836205 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:53.836462 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:54.336204 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:54.336277 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:54.336594 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:54.836224 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:54.836301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:54.836659 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:55.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:55.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:55.336616 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:55.836312 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:55.836389 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:55.836728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:55.836782 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:56.336654 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:56.336732 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:56.337071 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:56.836525 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:56.836605 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:56.836887 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:57.336719 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:57.336796 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:57.337143 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:57.836841 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:57.836920 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:57.837247 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:57.837302 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:58.337040 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:58.337110 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:58.337364 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:58.837119 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:58.837198 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:58.837538 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:59.336279 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:59.336367 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:59.336734 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:59.836438 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:59.836511 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:59.836774 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:00.355395 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:00.355523 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:00.355852 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:00.355945 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:00.836731 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:00.836813 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:00.837145 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:01.336514 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:01.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:01.336898 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:01.836757 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:01.836832 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:01.837174 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:02.336946 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:02.337023 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:02.337363 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:02.836523 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:02.836599 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:02.836906 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:02.836965 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:03.336199 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:03.336271 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:03.336598 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:03.836313 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:03.836395 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:03.836725 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:04.336141 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:04.336218 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:04.336472 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:04.836200 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:04.836276 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:04.836590 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:05.336247 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:05.336324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:05.336654 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:05.336712 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:05.836185 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:05.836257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:05.836570 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:06.336596 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:06.336670 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:06.337028 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:06.836851 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:06.836932 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:06.837278 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:07.337039 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:07.337104 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:07.337364 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:07.337404 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:07.837188 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:07.837264 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:07.837630 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:08.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:08.336324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:08.336644 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:08.836195 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:08.836269 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:08.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:09.336284 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:09.336374 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:09.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:09.836403 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:09.836488 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:09.836831 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:09.836885 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:10.336187 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:10.336264 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:10.336588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:10.836238 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:10.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:10.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:11.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:11.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:11.336683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:11.836362 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:11.836437 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:11.836693 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:12.336616 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:12.336691 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:12.337039 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:12.337098 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:12.836854 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:12.836931 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:12.837269 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:13.337012 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:13.337077 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:13.337331 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:13.837136 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:13.837214 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:13.837562 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll against https://192.168.49.2:8441/api/v1/nodes/functional-562018 repeats every ~0.5s from 14:52:13.837 through 14:53:15.336, each request carrying the Accept and User-Agent headers shown above and each logging an empty response (status="" headers="" milliseconds=0) ...]
	W1213 14:52:14.836598 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same node_ready.go:55 "connection refused" warning recurs roughly every 2s throughout this span, at 14:52:16.837, 14:52:19.336, 14:52:21.336, 14:52:23.336, 14:52:25.836, 14:52:27.837, 14:52:30.336, 14:52:32.336, 14:52:34.836, 14:52:36.837, 14:52:39.336, 14:52:41.336, 14:52:43.836, 14:52:45.836, 14:52:47.836, 14:52:50.336, 14:52:52.336, 14:52:54.836, 14:52:56.837, 14:52:59.336, 14:53:01.336, 14:53:03.836, 14:53:06.337, 14:53:08.836, 14:53:11.336, and 14:53:13.836 ...]
	I1213 14:53:15.836446 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:15.836528 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:15.836857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:15.836911 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:16.336886 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:16.336953 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:16.337211 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:16.836970 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:16.837043 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:16.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:17.336898 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:17.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:17.337298 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:17.837031 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:17.837110 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:17.837392 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:17.837435 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:18.336966 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:18.337049 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:18.337376 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:18.837166 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:18.837253 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:18.837689 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:19.336193 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:19.336283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:19.336617 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:19.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:19.836332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:19.836666 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:20.336399 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:20.336476 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:20.336824 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:20.336877 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:20.836528 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:20.836607 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:20.836879 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:21.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:21.336319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:21.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:21.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:21.836331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:21.836682 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:22.336425 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:22.336492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:22.336751 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:22.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:22.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:22.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:22.836701 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:23.336413 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:23.336491 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:23.336832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:23.836195 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:23.836282 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:23.836590 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:24.336251 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:24.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:24.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:24.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:24.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:24.836645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:25.336331 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:25.336403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:25.336743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:25.336792 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:25.836245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:25.836324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:25.836644 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:26.336605 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:26.336680 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:26.337038 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:26.836509 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:26.836578 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:26.836824 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:27.336452 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:27.336525 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:27.336887 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:27.336942 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:27.836486 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:27.836568 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:27.836917 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:28.336112 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:28.336186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:28.336563 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:28.836282 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:28.836357 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:28.836713 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:29.336309 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:29.336384 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:29.336723 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:29.836403 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:29.836478 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:29.836733 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:29.836776 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:30.336220 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:30.336298 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:30.336637 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:30.836357 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:30.836431 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:30.836763 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:31.336439 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:31.336532 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:31.336820 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:31.836503 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:31.836582 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:31.836898 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:31.836954 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:32.336893 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:32.336969 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:32.337280 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:32.837017 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:32.837102 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:32.837392 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:33.336206 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:33.336289 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:33.336624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:33.836226 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:33.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:33.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:34.336143 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:34.336223 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:34.336515 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:34.336566 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:34.836223 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:34.836309 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:34.836660 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:35.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:35.336337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:35.336768 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:35.836351 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:35.836427 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:35.836683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:36.336777 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:36.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:36.337168 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:36.337222 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:36.837003 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:36.837084 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:36.837449 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:37.336375 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:37.336445 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:37.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:37.836402 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:37.836479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:37.836826 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:38.336440 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:38.336525 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:38.336860 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:38.836259 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:38.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:38.836606 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:38.836659 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:39.336506 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:39.336596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:39.337235 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:39.836335 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:39.836421 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:39.836794 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:40.336522 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:40.336587 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:40.336888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:40.836592 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:40.836674 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:40.837021 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:40.837076 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:41.336578 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:41.336655 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:41.336975 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:41.836525 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:41.836604 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:41.836959 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:42.336767 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:42.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:42.337172 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:42.836977 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:42.837055 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:42.837406 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:42.837463 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:43.336096 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:43.336165 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:43.336522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:43.836216 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:43.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:43.836677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:44.336275 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:44.336366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:44.336718 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:44.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:44.836246 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:44.836531 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:45.336294 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:45.336384 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:45.336759 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:45.336815 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:45.836495 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:45.836571 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:45.836902 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:46.336923 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:46.336991 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:46.337248 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:46.836581 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:46.836658 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:46.836955 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:47.336876 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:47.336959 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:47.337291 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:47.337349 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:47.837127 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:47.837195 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:47.837512 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:48.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:48.336347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:48.336704 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:48.836213 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:48.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:48.836647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:49.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:49.336258 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:49.336584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:49.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:49.836330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:49.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:49.836707 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:50.336396 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:50.336475 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:50.336802 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:50.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:50.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:50.836524 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:51.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:51.336340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:51.336661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:51.836254 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:51.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:51.836673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:51.836731 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:52.336439 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:52.336508 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:52.336813 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:52.836552 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:52.836646 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:52.837037 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:53.336867 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:53.336943 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:53.337248 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:53.836529 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:53.836600 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:53.836882 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:53.836925 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:54.336730 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:54.336804 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:54.337142 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:54.836954 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:54.837030 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:54.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:55.337104 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:55.337186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:55.337475 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:55.836190 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:55.836268 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:55.836616 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:56.336432 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:56.336515 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:56.336847 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:56.336900 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:56.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:56.836260 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:56.836553 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:57.336500 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:57.336575 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:57.336934 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:57.836737 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:57.836827 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:57.837184 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:58.336550 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:58.336646 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:58.336966 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:58.337018 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:58.836741 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:58.836828 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:58.837162 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:59.336945 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:59.337026 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:59.337378 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:59.836973 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:59.837043 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:59.837302 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:00.337185 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:00.337285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:00.337926 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:00.338025 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:00.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:00.836326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:00.836691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:01.336224 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:01.336316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:01.336589 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:01.836224 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:01.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:01.836636 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:02.336530 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:02.336607 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:02.336904 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:02.836600 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:02.836677 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:02.837015 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:02.837082 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:03.336835 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:03.336910 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:03.337276 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:03.837094 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:03.837170 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:03.837522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:04.336212 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:04.336283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:04.336559 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:04.836246 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:04.836354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:04.836699 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:05.336254 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:05.336341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:05.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:05.336745 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:05.836209 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:05.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:05.836622 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:06.336695 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:06.336783 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:06.337108 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:06.836892 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:06.836966 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:06.837308 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:07.336123 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:07.336192 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:07.336465 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:07.836757 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:07.836832 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:07.837160 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:07.837217 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:08.336959 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:08.337035 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:08.337354 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:08.837047 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:08.837117 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:08.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:09.336797 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:09.336876 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:09.337176 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:09.836976 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:09.837060 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:09.837357 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:09.837405 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:10.337145 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:10.337219 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:10.337522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:10.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:10.836324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:10.836624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:11.336249 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:11.336335 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:11.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:11.836251 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:11.836329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:11.836584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:12.336557 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:12.336629 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:12.336964 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:12.337021 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:12.836792 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:12.836867 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:12.837180 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:13.336499 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:13.336580 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:13.336912 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:13.836538 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:13.836617 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:13.836932 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:14.336207 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:14.336299 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:14.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:14.836329 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:14.836410 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:14.836729 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:14.836786 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:15.336238 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:15.336371 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:15.336658 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:15.836347 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:15.836425 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:15.836765 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:16.336570 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:16.336641 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:16.336896 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:16.836234 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:16.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:16.836636 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:17.336499 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:17.336578 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:17.336890 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:17.336950 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:17.836161 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:17.836245 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:17.836561 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:18.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:18.336345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:18.336673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:18.836422 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:18.836499 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:18.836856 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:19.336539 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:19.336609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:19.336871 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:19.836236 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:19.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:19.836657 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:19.836712 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:20.336398 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:20.336479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:20.336829 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:20.836244 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:20.836336 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:20.836642 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:21.336253 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:21.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:21.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:21.836309 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:21.836398 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:21.836758 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:21.836814 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:22.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:22.336624 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:22.336925 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:22.836625 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:22.836707 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:22.837057 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:23.336641 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:23.336724 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:23.337073 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:23.836556 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:23.836634 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:23.836903 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:23.836945 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:24.336275 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:24.336357 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:24.336645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:24.836238 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:24.836349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:24.836732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:25.336455 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:25.336529 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:25.336850 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:25.836272 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:25.836354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:25.836679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:26.336762 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:26.336843 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:26.337194 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:26.337248 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:26.836530 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:26.836634 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:26.836949 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:27.337082 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:27.337168 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:27.337523 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:27.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:27.836347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:27.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:28.336192 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:28.336265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:28.336562 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:28.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:28.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:28.836676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:28.836744 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:29.336414 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:29.336563 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:29.336947 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:29.836198 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:29.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:29.836614 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:30.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:30.336318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:30.336656 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:30.836210 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:30.836286 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:30.836612 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:31.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:31.336362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:31.336639 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:31.336684 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:31.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:31.836343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:31.836692 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:32.336488 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:32.336567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:32.336863 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:32.836173 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:32.836265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:32.836578 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:33.336248 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:33.336322 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:33.336687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:33.336748 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:33.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:33.836362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:33.836704 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:34.336406 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:34.336478 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:34.336748 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:34.836467 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:34.836551 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:34.836857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:35.336588 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:35.336668 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:35.337027 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:35.337086 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:35.836514 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:35.836585 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:35.836913 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:36.336967 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:36.337041 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:36.337376 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:36.837202 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:36.837285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:36.837584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:37.336502 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:37.336591 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:37.336896 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:37.836591 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:37.836694 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:37.837046 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:37.837115 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:38.336899 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:38.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:38.337328 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:38.837050 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:38.837126 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:38.837404 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:39.336160 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:39.336232 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:39.336580 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:39.836289 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:39.836382 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:39.836703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:40.336371 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:40.336443 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:40.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:40.336759 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:40.836239 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:40.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:40.836655 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:41.336240 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:41.336320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:41.336686 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:41.836231 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:41.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:41.836611 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:42.336623 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:42.336717 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:42.337080 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:42.337132 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:42.836776 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:42.836862 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:42.837178 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:43.336512 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:43.336586 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:43.336846 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:43.836233 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:43.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:43.836685 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:44.336266 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:44.336339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:44.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:44.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:44.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:44.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:44.836676 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:45.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:45.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:45.336676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:45.836517 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:45.836597 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:45.836920 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:46.336894 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:46.336967 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:46.337224 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:46.837014 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:46.837094 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:46.837437 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:46.837490 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:47.336253 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:47.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:47.336670 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:47.836235 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:47.836302 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:47.836610 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:48.336235 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:48.336318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:48.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:48.836257 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:48.836337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:48.836678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:49.336349 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:49.336431 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:49.336732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:49.336821 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:49.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:49.836293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:49.836634 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:50.336226 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:50.336307 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:50.336635 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:50.836333 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:50.836410 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:50.836688 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:51.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:51.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:51.336678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:51.836396 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:51.836479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:51.836771 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:51.836817 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:52.336516 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:52.336593 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:52.336852 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:52.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:52.836340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:52.836773 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:53.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:53.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:53.336935 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:53.836510 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:53.836587 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:53.836851 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:53.836896 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:54.336367 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:54.336467 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:54.336808 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:54.836267 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:54.836343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:54.836686 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:55.336171 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:55.336242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:55.336562 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:55.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:55.836320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:55.836689 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:56.336624 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:56.336725 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:56.337092 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:56.337153 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:56.836464 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:56.836539 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:56.836794 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:57.336513 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:57.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:57.336897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:57.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:57.836300 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:57.836664 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:58.336100 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:58.336175 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:58.336496 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:58.836220 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:58.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:58.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:58.836706 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:59.336458 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:59.336535 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:59.336905 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:59.836288 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:59.836366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:59.836722 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:00.336435 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:00.336516 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:00.336842 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:00.836803 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:00.836881 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:00.837232 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:00.837290 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:01.336546 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:01.336620 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:01.336919 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:01.836631 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:01.836717 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:01.837061 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:02.336921 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:02.337000 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:02.337379 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:02.837188 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:02.837257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:02.837522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:02.837565 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:03.336219 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:03.336301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:03.336654 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:03.836248 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:03.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:03.836635 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:04.336178 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:04.336251 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:04.336567 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:04.836232 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:04.836318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:04.836669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:05.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:05.336317 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:05.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:05.336713 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:05.836366 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:05.836448 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:05.836735 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:06.336637 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:06.336720 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:06.337074 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:06.836743 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:06.836817 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:06.837172 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:07.336998 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:07.337074 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:07.337343 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:07.337395 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:07.837167 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:07.837242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:07.837584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:08.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:08.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:08.336647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:08.836178 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:08.836256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:08.836598 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:09.336224 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:09.336297 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:09.336632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:09.836244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:09.836321 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:09.836675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:09.836731 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:10.336173 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:10.336248 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:10.336521 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:10.836203 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:10.836283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:10.836642 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:11.336345 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:11.336437 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:11.336802 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:11.836493 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:11.836567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:11.836846 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:11.836897 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:12.336745 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:12.336822 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:12.337164 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:12.836822 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:12.836903 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:12.837329 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:13.337068 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:13.337137 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:13.337477 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:13.836207 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:13.836286 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:13.836668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:14.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:14.336327 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:14.336632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:14.336679 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:14.836300 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:14.836375 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:14.836649 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:15.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:15.336332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:15.336669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:15.836251 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:15.836333 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:15.836683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:16.336651 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:16.336729 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:16.337093 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:16.337145 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:16.836909 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:16.836992 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:16.837356 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:17.336137 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:17.336212 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:17.336571 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:17.836247 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:17.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:17.836610 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:18.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:18.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:18.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:18.836230 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:18.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:18.836647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:18.836705 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:19.336364 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:19.336454 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:19.336797 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:19.836215 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:19.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:19.836625 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:20.336325 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:20.336403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:20.336754 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:20.836274 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:20.836352 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:20.836687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:20.836744 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:21.336273 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:21.336350 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:21.336700 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:21.836398 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:21.836482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:21.836816 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:22.336510 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:22.336583 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:22.336841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:22.836211 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:22.836292 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:22.836650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:23.336238 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:23.336314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:23.336696 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:23.336754 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:23.836429 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:23.836502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:23.836789 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:24.336496 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:24.336580 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:24.336961 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:24.836574 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:24.836650 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:24.836988 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:25.336500 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:25.336566 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:25.336817 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:25.336861 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:25.836628 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:25.836709 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:25.837047 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:26.337039 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:26.337121 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:26.337470 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:26.836162 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:26.836244 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:26.836581 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:27.336591 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:27.336672 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:27.337011 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:27.337065 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:27.836601 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:27.836681 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:27.837000 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:28.336523 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:28.336604 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:28.336874 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:28.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:28.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:28.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:29.336385 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:29.336497 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:29.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:29.836183 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:29.836257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:29.836558 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:29.836608 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:30.336289 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:30.336367 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:30.336681 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:30.836223 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:30.836310 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:30.836632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:31.336179 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:31.336247 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:31.336520 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:31.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:31.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:31.836631 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:31.836685 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:32.336406 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:32.336490 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:32.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:32.836185 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:32.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:32.836552 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:33.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:33.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:33.336778 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:33.836367 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:33.836492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:33.836841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:33.836899 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:34.336602 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:34.336672 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:34.336962 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:34.836466 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:34.836542 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:34.836843 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:35.336251 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:35.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:35.336690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:35.836215 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:35.836283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:35.836600 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:36.336641 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:36.336716 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:36.337095 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:36.337155 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:36.836776 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:36.836857 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:36.837203 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:37.337030 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:37.337151 1297065 node_ready.go:38] duration metric: took 6m0.001157945s for node "functional-562018" to be "Ready" ...
	I1213 14:55:37.340291 1297065 out.go:203] 
	W1213 14:55:37.343143 1297065 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 14:55:37.343162 1297065 out.go:285] * 
	W1213 14:55:37.345311 1297065 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 14:55:37.348302 1297065 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 14:55:44 functional-562018 containerd[5205]: time="2025-12-13T14:55:44.662059921Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:45 functional-562018 containerd[5205]: time="2025-12-13T14:55:45.818542030Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 13 14:55:45 functional-562018 containerd[5205]: time="2025-12-13T14:55:45.820720139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 13 14:55:45 functional-562018 containerd[5205]: time="2025-12-13T14:55:45.827533213Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:45 functional-562018 containerd[5205]: time="2025-12-13T14:55:45.827946488Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:46 functional-562018 containerd[5205]: time="2025-12-13T14:55:46.766575667Z" level=info msg="No images store for sha256:3e1817b2097897bb33703eb5a3a650e117d1a4379ef0e281fcf78680554b6f9d"
	Dec 13 14:55:46 functional-562018 containerd[5205]: time="2025-12-13T14:55:46.768780549Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-562018\""
	Dec 13 14:55:46 functional-562018 containerd[5205]: time="2025-12-13T14:55:46.775718289Z" level=info msg="ImageCreate event name:\"sha256:e026052059b45d788f94e5aa4af0bc6e32bbfa2d449adbca80836f551dadd042\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:46 functional-562018 containerd[5205]: time="2025-12-13T14:55:46.776342439Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:47 functional-562018 containerd[5205]: time="2025-12-13T14:55:47.577118370Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 13 14:55:47 functional-562018 containerd[5205]: time="2025-12-13T14:55:47.579725163Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 13 14:55:47 functional-562018 containerd[5205]: time="2025-12-13T14:55:47.581600722Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 13 14:55:47 functional-562018 containerd[5205]: time="2025-12-13T14:55:47.593674355Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.503955591Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.506314808Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.508621447Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.524501960Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.692540483Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.694713603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.701499321Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.701853086Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.826201242Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.828373279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.836071264Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.836725641Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:55:50.589625    9213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:50.590335    9213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:50.591372    9213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:50.591993    9213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:50.593581    9213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 14:55:50 up  6:38,  0 user,  load average: 0.27, 0.28, 0.75
	Linux functional-562018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 14:55:47 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:47 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 13 14:55:47 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:47 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:47 functional-562018 kubelet[8978]: E1213 14:55:47.887250    8978 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:47 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:47 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:48 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 13 14:55:48 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:48 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:48 functional-562018 kubelet[9062]: E1213 14:55:48.642182    9062 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:48 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:48 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:49 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 13 14:55:49 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:49 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:49 functional-562018 kubelet[9110]: E1213 14:55:49.393566    9110 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:49 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:49 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:50 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 13 14:55:50 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:50 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:50 functional-562018 kubelet[9130]: E1213 14:55:50.150060    9130 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:50 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:50 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
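The node_ready.go lines in the dump above show a plain poll-until-deadline pattern: GET the node object every ~500ms, retry on "connection refused" while the apiserver is down, and give up once the 6m0s wait expires (the "WaitNodeCondition: context deadline exceeded" exit). The following is a minimal illustrative Go sketch of that pattern only, not minikube's actual node_ready.go; the URL, 500ms interval, and InsecureSkipVerify transport are assumptions for the sketch, whereas the real client authenticates with the cluster's client certificates and decodes the Node object.

	// Illustrative sketch of the wait loop implied by the log above (not minikube code).
	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// 6 minute overall deadline, matching the "wait 6m0s for node" failure above.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		client := &http.Client{
			// InsecureSkipVerify is only for this sketch; real clients use the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		url := "https://192.168.49.2:8441/api/v1/nodes/functional-562018"

		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()

		for {
			select {
			case <-ctx.Done():
				// The branch this test hit: context deadline exceeded after 6m of retries.
				fmt.Println("gave up waiting for node to be Ready:", ctx.Err())
				return
			case <-ticker.C:
				resp, err := client.Get(url)
				if err != nil {
					// e.g. "connect: connection refused" while the apiserver is down; retry.
					continue
				}
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					// A real implementation would decode the Node and check its Ready condition.
					fmt.Println("node object retrieved; would now check the Ready condition")
					return
				}
			}
		}
	}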
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 2 (334.105165ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.20s)
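The kubelet entries in the log dump above fail validation with "kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported", restarting in a loop (counter 823-826), so the apiserver never comes up and every kubectl-dependent test in this group fails with "connection refused". A hypothetical pre-flight check, not part of minikube or this test suite, could detect the hierarchy the host is on; the only assumption is the well-known cgroup v2 marker file /sys/fs/cgroup/cgroup.controllers, which exists on a unified (v2) host and not on a v1 host.

	// Hypothetical host check (not part of the test suite): report cgroup v1 vs v2.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// On a cgroup v2 (unified) host this file exists; on cgroup v1 it does not,
		// which is exactly the condition kubelet v1.35.0-beta.0 rejects in the log above.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy) detected")
		} else {
			fmt.Println("cgroup v1 detected: kubelet v1.35.0-beta.0 refuses to start here")
		}
	}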

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-562018 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-562018 get pods: exit status 1 (110.999902ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-562018 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-562018
helpers_test.go:244: (dbg) docker inspect functional-562018:

-- stdout --
	[
	    {
	        "Id": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	        "Created": "2025-12-13T14:41:15.451086653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1291703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T14:41:15.527927053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hosts",
	        "LogPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648-json.log",
	        "Name": "/functional-562018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-562018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-562018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	                "LowerDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-562018",
	                "Source": "/var/lib/docker/volumes/functional-562018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-562018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-562018",
	                "name.minikube.sigs.k8s.io": "functional-562018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b22297a29553cdd0dbc4eaa766abcdb1e67465ee18f1e2f5f9917dc8cf6d08",
	            "SandboxKey": "/var/run/docker/netns/f4b22297a295",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-562018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f3:95:ff:30:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd2e7aed753870546b4bccdeed8073ad5795c14334e906d06b964f72dc448c38",
	                    "EndpointID": "a0e947c1e40f773105c811b67b7d1d63f19d3a20060380bbde944bf9bfe39be5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-562018",
	                        "2cd1277ca783"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
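The inspect output above confirms the container publishes 22/tcp, 2376/tcp, 5000/tcp, 8441/tcp and 32443/tcp on 127.0.0.1. To look up a single mapped host port the same way the harness does further down in this log, a Go-template query against docker inspect is enough (a minimal sketch, assuming the functional-562018 container still exists; the template mirrors the one minikube itself runs later in this log):

	# host port for SSH (22/tcp); prints 33918 for this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-562018
	# host port for the apiserver (8441/tcp); prints 33921 for this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-562018
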
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018: exit status 2 (337.368707ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
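The formatted status query above reported "Running" for the host yet still exited 2, which the harness tolerates ("may be ok"). When triaging by hand, dropping the --format flag shows every component in one call (a sketch assuming the profile from this run is still present; a non-zero exit generally means at least one component is not in its expected state):

	out/minikube-linux-arm64 status -p functional-562018
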
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-831661 image ls --format json --alsologtostderr                                                                                              │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image ls --format short --alsologtostderr                                                                                             │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image ls --format table --alsologtostderr                                                                                             │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ ssh     │ functional-831661 ssh pgrep buildkitd                                                                                                                   │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	│ image   │ functional-831661 image ls --format yaml --alsologtostderr                                                                                              │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image build -t localhost/my-image:functional-831661 testdata/build --alsologtostderr                                                  │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image ls                                                                                                                              │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ delete  │ -p functional-831661                                                                                                                                    │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ start   │ -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	│ start   │ -p functional-562018 --alsologtostderr -v=8                                                                                                             │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:49 UTC │                     │
	│ cache   │ functional-562018 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache add registry.k8s.io/pause:latest                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache add minikube-local-cache-test:functional-562018                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache delete minikube-local-cache-test:functional-562018                                                                              │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl images                                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │                     │
	│ cache   │ functional-562018 cache reload                                                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ kubectl │ functional-562018 kubectl -- --context functional-562018 get pods                                                                                       │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:49:32
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:49:32.175934 1297065 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:49:32.176062 1297065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:49:32.176074 1297065 out.go:374] Setting ErrFile to fd 2...
	I1213 14:49:32.176081 1297065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:49:32.176329 1297065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:49:32.176775 1297065 out.go:368] Setting JSON to false
	I1213 14:49:32.177662 1297065 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23521,"bootTime":1765613851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:49:32.177756 1297065 start.go:143] virtualization:  
	I1213 14:49:32.181250 1297065 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:49:32.184279 1297065 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:49:32.184349 1297065 notify.go:221] Checking for updates...
	I1213 14:49:32.190681 1297065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:49:32.193733 1297065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:32.196589 1297065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:49:32.199444 1297065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 14:49:32.202364 1297065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:49:32.205680 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:32.205788 1297065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:49:32.233101 1297065 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:49:32.233224 1297065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:49:32.299716 1297065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 14:49:32.290425951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:49:32.299832 1297065 docker.go:319] overlay module found
	I1213 14:49:32.305094 1297065 out.go:179] * Using the docker driver based on existing profile
	I1213 14:49:32.307726 1297065 start.go:309] selected driver: docker
	I1213 14:49:32.307744 1297065 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:32.307856 1297065 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:49:32.307958 1297065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:49:32.364202 1297065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 14:49:32.354888078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:49:32.364608 1297065 cni.go:84] Creating CNI manager for ""
	I1213 14:49:32.364673 1297065 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:49:32.364721 1297065 start.go:353] cluster config:
	{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPa
th: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:32.367887 1297065 out.go:179] * Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	I1213 14:49:32.370579 1297065 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 14:49:32.373599 1297065 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 14:49:32.376553 1297065 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:49:32.376606 1297065 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 14:49:32.376621 1297065 cache.go:65] Caching tarball of preloaded images
	I1213 14:49:32.376630 1297065 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 14:49:32.376703 1297065 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 14:49:32.376713 1297065 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 14:49:32.376820 1297065 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
	I1213 14:49:32.396105 1297065 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 14:49:32.396128 1297065 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 14:49:32.396160 1297065 cache.go:243] Successfully downloaded all kic artifacts
	I1213 14:49:32.396191 1297065 start.go:360] acquireMachinesLock for functional-562018: {Name:mk6a7956e4fce5d8e0f4d6fe039ab67ad6cd688b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:49:32.396254 1297065 start.go:364] duration metric: took 40.319µs to acquireMachinesLock for "functional-562018"
	I1213 14:49:32.396277 1297065 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:49:32.396287 1297065 fix.go:54] fixHost starting: 
	I1213 14:49:32.396543 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:32.413077 1297065 fix.go:112] recreateIfNeeded on functional-562018: state=Running err=<nil>
	W1213 14:49:32.413105 1297065 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:49:32.416298 1297065 out.go:252] * Updating the running docker "functional-562018" container ...
	I1213 14:49:32.416337 1297065 machine.go:94] provisionDockerMachine start ...
	I1213 14:49:32.416434 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.434428 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.434755 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.434764 1297065 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:49:32.588560 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:49:32.588587 1297065 ubuntu.go:182] provisioning hostname "functional-562018"
	I1213 14:49:32.588651 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.607983 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.608286 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.608297 1297065 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-562018 && echo "functional-562018" | sudo tee /etc/hostname
	I1213 14:49:32.769183 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:49:32.769274 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:32.789428 1297065 main.go:143] libmachine: Using SSH client type: native
	I1213 14:49:32.789750 1297065 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:49:32.789773 1297065 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-562018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-562018/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-562018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:49:32.943886 1297065 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:49:32.943914 1297065 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 14:49:32.943934 1297065 ubuntu.go:190] setting up certificates
	I1213 14:49:32.943953 1297065 provision.go:84] configureAuth start
	I1213 14:49:32.944016 1297065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:49:32.962011 1297065 provision.go:143] copyHostCerts
	I1213 14:49:32.962065 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:49:32.962109 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 14:49:32.962123 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:49:32.962204 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 14:49:32.962309 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:49:32.962331 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 14:49:32.962339 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:49:32.962367 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 14:49:32.962422 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:49:32.962443 1297065 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 14:49:32.962451 1297065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:49:32.962476 1297065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 14:49:32.962539 1297065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.functional-562018 san=[127.0.0.1 192.168.49.2 functional-562018 localhost minikube]
	I1213 14:49:33.179564 1297065 provision.go:177] copyRemoteCerts
	I1213 14:49:33.179638 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:49:33.179690 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.200012 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.307268 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 14:49:33.307352 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 14:49:33.325080 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 14:49:33.325187 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 14:49:33.348055 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 14:49:33.348124 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 14:49:33.368733 1297065 provision.go:87] duration metric: took 424.756928ms to configureAuth
	I1213 14:49:33.368776 1297065 ubuntu.go:206] setting minikube options for container-runtime
	I1213 14:49:33.368958 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:33.368972 1297065 machine.go:97] duration metric: took 952.628419ms to provisionDockerMachine
	I1213 14:49:33.368979 1297065 start.go:293] postStartSetup for "functional-562018" (driver="docker")
	I1213 14:49:33.368990 1297065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:49:33.369043 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:49:33.369100 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.388800 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.495227 1297065 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:49:33.498339 1297065 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 14:49:33.498360 1297065 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 14:49:33.498365 1297065 command_runner.go:130] > VERSION_ID="12"
	I1213 14:49:33.498369 1297065 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 14:49:33.498374 1297065 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 14:49:33.498378 1297065 command_runner.go:130] > ID=debian
	I1213 14:49:33.498382 1297065 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 14:49:33.498387 1297065 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 14:49:33.498400 1297065 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 14:49:33.498729 1297065 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 14:49:33.498752 1297065 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 14:49:33.498764 1297065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 14:49:33.498818 1297065 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 14:49:33.498907 1297065 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 14:49:33.498914 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> /etc/ssl/certs/12529342.pem
	I1213 14:49:33.498991 1297065 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> hosts in /etc/test/nested/copy/1252934
	I1213 14:49:33.498996 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> /etc/test/nested/copy/1252934/hosts
	I1213 14:49:33.499038 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1252934
	I1213 14:49:33.506503 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:49:33.524063 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts --> /etc/test/nested/copy/1252934/hosts (40 bytes)
	I1213 14:49:33.542234 1297065 start.go:296] duration metric: took 173.238726ms for postStartSetup
	I1213 14:49:33.542347 1297065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:49:33.542395 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.560689 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.668283 1297065 command_runner.go:130] > 18%
	I1213 14:49:33.668429 1297065 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 14:49:33.673015 1297065 command_runner.go:130] > 160G
	I1213 14:49:33.673516 1297065 fix.go:56] duration metric: took 1.277224674s for fixHost
	I1213 14:49:33.673545 1297065 start.go:83] releasing machines lock for "functional-562018", held for 1.277279647s
	I1213 14:49:33.673651 1297065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:49:33.691077 1297065 ssh_runner.go:195] Run: cat /version.json
	I1213 14:49:33.691140 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.691468 1297065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:49:33.691538 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:33.709148 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.719417 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:33.814811 1297065 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 14:49:33.814943 1297065 ssh_runner.go:195] Run: systemctl --version
	I1213 14:49:33.903672 1297065 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 14:49:33.906947 1297065 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 14:49:33.906982 1297065 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 14:49:33.907055 1297065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 14:49:33.911546 1297065 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 14:49:33.911590 1297065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:49:33.911661 1297065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:49:33.919539 1297065 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 14:49:33.919560 1297065 start.go:496] detecting cgroup driver to use...
	I1213 14:49:33.919591 1297065 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 14:49:33.919652 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 14:49:33.935466 1297065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 14:49:33.948503 1297065 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:49:33.948565 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:49:33.964251 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:49:33.977532 1297065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:49:34.098935 1297065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:49:34.240532 1297065 docker.go:234] disabling docker service ...
	I1213 14:49:34.240643 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:49:34.257037 1297065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:49:34.270650 1297065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:49:34.390022 1297065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:49:34.521564 1297065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:49:34.535848 1297065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:49:34.549721 1297065 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 14:49:34.551043 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 14:49:34.560293 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 14:49:34.569539 1297065 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 14:49:34.569607 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 14:49:34.578725 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:49:34.587464 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 14:49:34.595867 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:49:34.604914 1297065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:49:34.612837 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 14:49:34.621746 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 14:49:34.631405 1297065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 14:49:34.640934 1297065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:49:34.647949 1297065 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 14:49:34.649110 1297065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:49:34.656959 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:34.763520 1297065 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 14:49:34.891785 1297065 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 14:49:34.891886 1297065 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 14:49:34.896000 1297065 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 14:49:34.896045 1297065 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 14:49:34.896074 1297065 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1213 14:49:34.896088 1297065 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 14:49:34.896099 1297065 command_runner.go:130] > Access: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896109 1297065 command_runner.go:130] > Modify: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896114 1297065 command_runner.go:130] > Change: 2025-12-13 14:49:34.846323232 +0000
	I1213 14:49:34.896117 1297065 command_runner.go:130] >  Birth: -
	I1213 14:49:34.896860 1297065 start.go:564] Will wait 60s for crictl version
	I1213 14:49:34.896947 1297065 ssh_runner.go:195] Run: which crictl
	I1213 14:49:34.901248 1297065 command_runner.go:130] > /usr/local/bin/crictl
	I1213 14:49:34.901933 1297065 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 14:49:34.925912 1297065 command_runner.go:130] > Version:  0.1.0
	I1213 14:49:34.925937 1297065 command_runner.go:130] > RuntimeName:  containerd
	I1213 14:49:34.925943 1297065 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 14:49:34.925948 1297065 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 14:49:34.928438 1297065 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 14:49:34.928554 1297065 ssh_runner.go:195] Run: containerd --version
	I1213 14:49:34.949487 1297065 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 14:49:34.951799 1297065 ssh_runner.go:195] Run: containerd --version
	I1213 14:49:34.970090 1297065 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 14:49:34.977895 1297065 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 14:49:34.980777 1297065 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 14:49:34.997091 1297065 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 14:49:35.003196 1297065 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 14:49:35.003415 1297065 kubeadm.go:884] updating cluster {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:49:35.003575 1297065 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:49:35.003657 1297065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:49:35.028469 1297065 command_runner.go:130] > {
	I1213 14:49:35.028488 1297065 command_runner.go:130] >   "images":  [
	I1213 14:49:35.028493 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028502 1297065 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 14:49:35.028509 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028514 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 14:49:35.028518 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028522 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028533 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 14:49:35.028536 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028541 1297065 command_runner.go:130] >       "size":  "40636774",
	I1213 14:49:35.028545 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028549 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028552 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028555 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028563 1297065 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 14:49:35.028567 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028572 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 14:49:35.028574 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028583 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028592 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 14:49:35.028595 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028599 1297065 command_runner.go:130] >       "size":  "8034419",
	I1213 14:49:35.028603 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028607 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028610 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028613 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028620 1297065 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 14:49:35.028624 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028630 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 14:49:35.028633 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028641 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028649 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 14:49:35.028652 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028656 1297065 command_runner.go:130] >       "size":  "21168808",
	I1213 14:49:35.028660 1297065 command_runner.go:130] >       "username":  "nonroot",
	I1213 14:49:35.028664 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028667 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028670 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028677 1297065 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 14:49:35.028680 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028685 1297065 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 14:49:35.028688 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028691 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028698 1297065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 14:49:35.028701 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028706 1297065 command_runner.go:130] >       "size":  "21136588",
	I1213 14:49:35.028710 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028714 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028717 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028721 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028725 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028731 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028734 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028741 1297065 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 14:49:35.028745 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028750 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 14:49:35.028753 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028757 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028764 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 14:49:35.028768 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028772 1297065 command_runner.go:130] >       "size":  "24678359",
	I1213 14:49:35.028775 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028783 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028786 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028790 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028794 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028797 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028799 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028806 1297065 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 14:49:35.028809 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028815 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 14:49:35.028818 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028822 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028829 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 14:49:35.028833 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028837 1297065 command_runner.go:130] >       "size":  "20661043",
	I1213 14:49:35.028841 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028844 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028847 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028852 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028855 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028858 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028861 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028867 1297065 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 14:49:35.028877 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028883 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 14:49:35.028886 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028890 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028897 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 14:49:35.028900 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028905 1297065 command_runner.go:130] >       "size":  "22429671",
	I1213 14:49:35.028908 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028912 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028915 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028919 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028926 1297065 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 14:49:35.028929 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028934 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 14:49:35.028937 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028941 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.028948 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 14:49:35.028951 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028955 1297065 command_runner.go:130] >       "size":  "15391364",
	I1213 14:49:35.028959 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.028962 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.028965 1297065 command_runner.go:130] >       },
	I1213 14:49:35.028969 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.028972 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.028975 1297065 command_runner.go:130] >     },
	I1213 14:49:35.028978 1297065 command_runner.go:130] >     {
	I1213 14:49:35.028984 1297065 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 14:49:35.028987 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.028992 1297065 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 14:49:35.028995 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.028998 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.029005 1297065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 14:49:35.029009 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.029016 1297065 command_runner.go:130] >       "size":  "267939",
	I1213 14:49:35.029019 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.029023 1297065 command_runner.go:130] >         "value":  "65535"
	I1213 14:49:35.029030 1297065 command_runner.go:130] >       },
	I1213 14:49:35.029034 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.029037 1297065 command_runner.go:130] >       "pinned":  true
	I1213 14:49:35.029040 1297065 command_runner.go:130] >     }
	I1213 14:49:35.029043 1297065 command_runner.go:130] >   ]
	I1213 14:49:35.029046 1297065 command_runner.go:130] > }
	I1213 14:49:35.031562 1297065 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:49:35.031587 1297065 containerd.go:534] Images already preloaded, skipping extraction
	I1213 14:49:35.031647 1297065 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:49:35.054892 1297065 command_runner.go:130] > {
	I1213 14:49:35.054913 1297065 command_runner.go:130] >   "images":  [
	I1213 14:49:35.054918 1297065 command_runner.go:130] >     {
	I1213 14:49:35.054928 1297065 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 14:49:35.054933 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.054939 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 14:49:35.054943 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.054947 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.054959 1297065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 14:49:35.054966 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.054970 1297065 command_runner.go:130] >       "size":  "40636774",
	I1213 14:49:35.054977 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.054982 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.054993 1297065 command_runner.go:130] >     },
	I1213 14:49:35.054996 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055014 1297065 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 14:49:35.055021 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055030 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 14:49:35.055033 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055037 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055045 1297065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 14:49:35.055049 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055053 1297065 command_runner.go:130] >       "size":  "8034419",
	I1213 14:49:35.055057 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055060 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055064 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055067 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055074 1297065 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 14:49:35.055081 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055086 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 14:49:35.055092 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055104 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055117 1297065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 14:49:35.055121 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055125 1297065 command_runner.go:130] >       "size":  "21168808",
	I1213 14:49:35.055135 1297065 command_runner.go:130] >       "username":  "nonroot",
	I1213 14:49:35.055139 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055143 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055151 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055158 1297065 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 14:49:35.055162 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055169 1297065 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 14:49:35.055173 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055177 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055187 1297065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 14:49:35.055193 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055201 1297065 command_runner.go:130] >       "size":  "21136588",
	I1213 14:49:35.055205 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055210 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055217 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055221 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055225 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055231 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055234 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055241 1297065 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 14:49:35.055246 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055254 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 14:49:35.055257 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055261 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055272 1297065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 14:49:35.055278 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055283 1297065 command_runner.go:130] >       "size":  "24678359",
	I1213 14:49:35.055286 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055294 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055300 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055304 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055329 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055335 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055339 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055346 1297065 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 14:49:35.055352 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055358 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 14:49:35.055371 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055375 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055383 1297065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 14:49:35.055388 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055392 1297065 command_runner.go:130] >       "size":  "20661043",
	I1213 14:49:35.055399 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055403 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055410 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055415 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055422 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055425 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055428 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055435 1297065 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 14:49:35.055446 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055452 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 14:49:35.055455 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055460 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055469 1297065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 14:49:35.055477 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055482 1297065 command_runner.go:130] >       "size":  "22429671",
	I1213 14:49:35.055486 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055494 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055497 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055500 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055511 1297065 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 14:49:35.055515 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055524 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 14:49:35.055529 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055533 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055541 1297065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 14:49:35.055547 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055551 1297065 command_runner.go:130] >       "size":  "15391364",
	I1213 14:49:35.055554 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055559 1297065 command_runner.go:130] >         "value":  "0"
	I1213 14:49:35.055564 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055568 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055574 1297065 command_runner.go:130] >       "pinned":  false
	I1213 14:49:35.055578 1297065 command_runner.go:130] >     },
	I1213 14:49:35.055581 1297065 command_runner.go:130] >     {
	I1213 14:49:35.055587 1297065 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 14:49:35.055595 1297065 command_runner.go:130] >       "repoTags":  [
	I1213 14:49:35.055602 1297065 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 14:49:35.055608 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055612 1297065 command_runner.go:130] >       "repoDigests":  [
	I1213 14:49:35.055620 1297065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 14:49:35.055626 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.055630 1297065 command_runner.go:130] >       "size":  "267939",
	I1213 14:49:35.055633 1297065 command_runner.go:130] >       "uid":  {
	I1213 14:49:35.055637 1297065 command_runner.go:130] >         "value":  "65535"
	I1213 14:49:35.055651 1297065 command_runner.go:130] >       },
	I1213 14:49:35.055655 1297065 command_runner.go:130] >       "username":  "",
	I1213 14:49:35.055659 1297065 command_runner.go:130] >       "pinned":  true
	I1213 14:49:35.055662 1297065 command_runner.go:130] >     }
	I1213 14:49:35.055666 1297065 command_runner.go:130] >   ]
	I1213 14:49:35.055669 1297065 command_runner.go:130] > }
	I1213 14:49:35.057995 1297065 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:49:35.058021 1297065 cache_images.go:86] Images are preloaded, skipping loading
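
The two `sudo crictl images --output json` runs above are how minikube decides that the preload tarball does not need to be extracted: every image required for v1.35.0-beta.0 already appears in the runtime's image list. Below is a minimal sketch of that check, assuming a hypothetical `required` tag list and stdlib-only JSON parsing; it is illustrative, not the actual containerd.go / cache_images.go code.

// preloadcheck.go - sketch of deciding "all images are preloaded" from crictl output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors only the crictl JSON fields used here.
type image struct {
	RepoTags []string `json:"repoTags"`
}

type imageList struct {
	Images []image `json:"images"`
}

func main() {
	// Hypothetical set of tags the preload is expected to provide.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/etcd:3.6.5-0",
		"registry.k8s.io/coredns/coredns:v1.13.1",
	}

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}

	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}

	// Index every repo tag reported by containerd.
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}

	for _, want := range required {
		if !have[want] {
			fmt.Println("missing, extraction needed:", want)
			return
		}
	}
	fmt.Println("all images are preloaded, skipping extraction")
}

Run against the node's containerd, the JSON shown above would satisfy every tag in that list, which is exactly why the log skips extraction and loading.
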
	I1213 14:49:35.058031 1297065 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 14:49:35.058154 1297065 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-562018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 14:49:35.058232 1297065 ssh_runner.go:195] Run: sudo crictl info
	I1213 14:49:35.082362 1297065 command_runner.go:130] > {
	I1213 14:49:35.082385 1297065 command_runner.go:130] >   "cniconfig": {
	I1213 14:49:35.082391 1297065 command_runner.go:130] >     "Networks": [
	I1213 14:49:35.082395 1297065 command_runner.go:130] >       {
	I1213 14:49:35.082401 1297065 command_runner.go:130] >         "Config": {
	I1213 14:49:35.082405 1297065 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 14:49:35.082411 1297065 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 14:49:35.082415 1297065 command_runner.go:130] >           "Plugins": [
	I1213 14:49:35.082419 1297065 command_runner.go:130] >             {
	I1213 14:49:35.082423 1297065 command_runner.go:130] >               "Network": {
	I1213 14:49:35.082427 1297065 command_runner.go:130] >                 "ipam": {},
	I1213 14:49:35.082432 1297065 command_runner.go:130] >                 "type": "loopback"
	I1213 14:49:35.082436 1297065 command_runner.go:130] >               },
	I1213 14:49:35.082446 1297065 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 14:49:35.082450 1297065 command_runner.go:130] >             }
	I1213 14:49:35.082457 1297065 command_runner.go:130] >           ],
	I1213 14:49:35.082467 1297065 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 14:49:35.082473 1297065 command_runner.go:130] >         },
	I1213 14:49:35.082488 1297065 command_runner.go:130] >         "IFName": "lo"
	I1213 14:49:35.082495 1297065 command_runner.go:130] >       }
	I1213 14:49:35.082498 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082503 1297065 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 14:49:35.082507 1297065 command_runner.go:130] >     "PluginDirs": [
	I1213 14:49:35.082511 1297065 command_runner.go:130] >       "/opt/cni/bin"
	I1213 14:49:35.082516 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082520 1297065 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 14:49:35.082527 1297065 command_runner.go:130] >     "Prefix": "eth"
	I1213 14:49:35.082530 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082533 1297065 command_runner.go:130] >   "config": {
	I1213 14:49:35.082537 1297065 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 14:49:35.082544 1297065 command_runner.go:130] >       "/etc/cdi",
	I1213 14:49:35.082549 1297065 command_runner.go:130] >       "/var/run/cdi"
	I1213 14:49:35.082552 1297065 command_runner.go:130] >     ],
	I1213 14:49:35.082559 1297065 command_runner.go:130] >     "cni": {
	I1213 14:49:35.082562 1297065 command_runner.go:130] >       "binDir": "",
	I1213 14:49:35.082566 1297065 command_runner.go:130] >       "binDirs": [
	I1213 14:49:35.082570 1297065 command_runner.go:130] >         "/opt/cni/bin"
	I1213 14:49:35.082573 1297065 command_runner.go:130] >       ],
	I1213 14:49:35.082578 1297065 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 14:49:35.082581 1297065 command_runner.go:130] >       "confTemplate": "",
	I1213 14:49:35.082586 1297065 command_runner.go:130] >       "ipPref": "",
	I1213 14:49:35.082589 1297065 command_runner.go:130] >       "maxConfNum": 1,
	I1213 14:49:35.082593 1297065 command_runner.go:130] >       "setupSerially": false,
	I1213 14:49:35.082601 1297065 command_runner.go:130] >       "useInternalLoopback": false
	I1213 14:49:35.082604 1297065 command_runner.go:130] >     },
	I1213 14:49:35.082611 1297065 command_runner.go:130] >     "containerd": {
	I1213 14:49:35.082617 1297065 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 14:49:35.082622 1297065 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 14:49:35.082629 1297065 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 14:49:35.082634 1297065 command_runner.go:130] >       "runtimes": {
	I1213 14:49:35.082637 1297065 command_runner.go:130] >         "runc": {
	I1213 14:49:35.082648 1297065 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 14:49:35.082654 1297065 command_runner.go:130] >           "PodAnnotations": null,
	I1213 14:49:35.082659 1297065 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 14:49:35.082672 1297065 command_runner.go:130] >           "cgroupWritable": false,
	I1213 14:49:35.082676 1297065 command_runner.go:130] >           "cniConfDir": "",
	I1213 14:49:35.082680 1297065 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 14:49:35.082684 1297065 command_runner.go:130] >           "io_type": "",
	I1213 14:49:35.082688 1297065 command_runner.go:130] >           "options": {
	I1213 14:49:35.082693 1297065 command_runner.go:130] >             "BinaryName": "",
	I1213 14:49:35.082699 1297065 command_runner.go:130] >             "CriuImagePath": "",
	I1213 14:49:35.082703 1297065 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 14:49:35.082707 1297065 command_runner.go:130] >             "IoGid": 0,
	I1213 14:49:35.082714 1297065 command_runner.go:130] >             "IoUid": 0,
	I1213 14:49:35.082719 1297065 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 14:49:35.082725 1297065 command_runner.go:130] >             "Root": "",
	I1213 14:49:35.082729 1297065 command_runner.go:130] >             "ShimCgroup": "",
	I1213 14:49:35.082743 1297065 command_runner.go:130] >             "SystemdCgroup": false
	I1213 14:49:35.082746 1297065 command_runner.go:130] >           },
	I1213 14:49:35.082751 1297065 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 14:49:35.082758 1297065 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 14:49:35.082765 1297065 command_runner.go:130] >           "runtimePath": "",
	I1213 14:49:35.082769 1297065 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 14:49:35.082774 1297065 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 14:49:35.082778 1297065 command_runner.go:130] >           "snapshotter": ""
	I1213 14:49:35.082784 1297065 command_runner.go:130] >         }
	I1213 14:49:35.082787 1297065 command_runner.go:130] >       }
	I1213 14:49:35.082790 1297065 command_runner.go:130] >     },
	I1213 14:49:35.082801 1297065 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 14:49:35.082809 1297065 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 14:49:35.082816 1297065 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 14:49:35.082820 1297065 command_runner.go:130] >     "disableApparmor": false,
	I1213 14:49:35.082825 1297065 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 14:49:35.082832 1297065 command_runner.go:130] >     "disableProcMount": false,
	I1213 14:49:35.082839 1297065 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 14:49:35.082845 1297065 command_runner.go:130] >     "enableCDI": true,
	I1213 14:49:35.082850 1297065 command_runner.go:130] >     "enableSelinux": false,
	I1213 14:49:35.082857 1297065 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 14:49:35.082862 1297065 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 14:49:35.082866 1297065 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 14:49:35.082871 1297065 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 14:49:35.082875 1297065 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 14:49:35.082880 1297065 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 14:49:35.082887 1297065 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 14:49:35.082893 1297065 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 14:49:35.082904 1297065 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 14:49:35.082910 1297065 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 14:49:35.082915 1297065 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 14:49:35.082926 1297065 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 14:49:35.082932 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082936 1297065 command_runner.go:130] >   "features": {
	I1213 14:49:35.082943 1297065 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 14:49:35.082946 1297065 command_runner.go:130] >   },
	I1213 14:49:35.082950 1297065 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 14:49:35.082959 1297065 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 14:49:35.082976 1297065 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 14:49:35.082980 1297065 command_runner.go:130] >   "runtimeHandlers": [
	I1213 14:49:35.082984 1297065 command_runner.go:130] >     {
	I1213 14:49:35.082988 1297065 command_runner.go:130] >       "features": {
	I1213 14:49:35.083000 1297065 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 14:49:35.083004 1297065 command_runner.go:130] >         "user_namespaces": true
	I1213 14:49:35.083008 1297065 command_runner.go:130] >       }
	I1213 14:49:35.083012 1297065 command_runner.go:130] >     },
	I1213 14:49:35.083017 1297065 command_runner.go:130] >     {
	I1213 14:49:35.083021 1297065 command_runner.go:130] >       "features": {
	I1213 14:49:35.083026 1297065 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 14:49:35.083033 1297065 command_runner.go:130] >         "user_namespaces": true
	I1213 14:49:35.083041 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083055 1297065 command_runner.go:130] >       "name": "runc"
	I1213 14:49:35.083058 1297065 command_runner.go:130] >     }
	I1213 14:49:35.083061 1297065 command_runner.go:130] >   ],
	I1213 14:49:35.083064 1297065 command_runner.go:130] >   "status": {
	I1213 14:49:35.083068 1297065 command_runner.go:130] >     "conditions": [
	I1213 14:49:35.083077 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083081 1297065 command_runner.go:130] >         "message": "",
	I1213 14:49:35.083085 1297065 command_runner.go:130] >         "reason": "",
	I1213 14:49:35.083089 1297065 command_runner.go:130] >         "status": true,
	I1213 14:49:35.083098 1297065 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 14:49:35.083104 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083107 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083113 1297065 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 14:49:35.083118 1297065 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 14:49:35.083122 1297065 command_runner.go:130] >         "status": false,
	I1213 14:49:35.083128 1297065 command_runner.go:130] >         "type": "NetworkReady"
	I1213 14:49:35.083132 1297065 command_runner.go:130] >       },
	I1213 14:49:35.083135 1297065 command_runner.go:130] >       {
	I1213 14:49:35.083160 1297065 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 14:49:35.083171 1297065 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 14:49:35.083176 1297065 command_runner.go:130] >         "status": false,
	I1213 14:49:35.083182 1297065 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 14:49:35.083186 1297065 command_runner.go:130] >       }
	I1213 14:49:35.083190 1297065 command_runner.go:130] >     ]
	I1213 14:49:35.083196 1297065 command_runner.go:130] >   }
	I1213 14:49:35.083199 1297065 command_runner.go:130] > }
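
The `sudo crictl info` dump above reports RuntimeReady=true but NetworkReady=false ("cni plugin not initialized"), which is why the next step creates a CNI manager and recommends kindnet. A minimal sketch of reading that condition, assuming only the JSON fields visible in the log (this is not minikube's cni.go):

// networkready.go - sketch: surface the NetworkReady condition from `crictl info`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type condition struct {
	Type    string `json:"type"`
	Status  bool   `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

type criInfo struct {
	Status struct {
		Conditions []condition `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	for _, c := range info.Status.Conditions {
		if c.Type == "NetworkReady" && !c.Status {
			// Expected until a CNI plugin (here: kindnet) writes its config to /etc/cni/net.d.
			fmt.Printf("network not ready: %s (%s)\n", c.Reason, c.Message)
		}
	}
}
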
	I1213 14:49:35.086343 1297065 cni.go:84] Creating CNI manager for ""
	I1213 14:49:35.086370 1297065 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:49:35.086397 1297065 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:49:35.086420 1297065 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-562018 NodeName:functional-562018 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:49:35.086540 1297065 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-562018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 14:49:35.086621 1297065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 14:49:35.094718 1297065 command_runner.go:130] > kubeadm
	I1213 14:49:35.094739 1297065 command_runner.go:130] > kubectl
	I1213 14:49:35.094743 1297065 command_runner.go:130] > kubelet
	I1213 14:49:35.094761 1297065 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:49:35.094814 1297065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:49:35.102589 1297065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 14:49:35.115905 1297065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 14:49:35.129462 1297065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 14:49:35.142335 1297065 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 14:49:35.146161 1297065 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 14:49:35.146280 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:35.271079 1297065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:49:35.585791 1297065 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018 for IP: 192.168.49.2
	I1213 14:49:35.585864 1297065 certs.go:195] generating shared ca certs ...
	I1213 14:49:35.585895 1297065 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:35.586063 1297065 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 14:49:35.586138 1297065 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 14:49:35.586175 1297065 certs.go:257] generating profile certs ...
	I1213 14:49:35.586327 1297065 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key
	I1213 14:49:35.586437 1297065 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee
	I1213 14:49:35.586523 1297065 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key
	I1213 14:49:35.586557 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 14:49:35.586602 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 14:49:35.586632 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 14:49:35.586672 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 14:49:35.586707 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 14:49:35.586737 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 14:49:35.586777 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 14:49:35.586811 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 14:49:35.586902 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 14:49:35.586962 1297065 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 14:49:35.586986 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:49:35.587046 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 14:49:35.587098 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:49:35.587157 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 14:49:35.587232 1297065 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:49:35.587302 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.587371 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem -> /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.587399 1297065 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.588006 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:49:35.609077 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:49:35.630697 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:49:35.652426 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 14:49:35.670342 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 14:49:35.687837 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 14:49:35.705877 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:49:35.723466 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:49:35.740679 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:49:35.758304 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 14:49:35.776736 1297065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 14:49:35.794339 1297065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:49:35.806740 1297065 ssh_runner.go:195] Run: openssl version
	I1213 14:49:35.812461 1297065 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 14:49:35.812883 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.820227 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:49:35.827978 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831610 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831636 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.831688 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:49:35.871766 1297065 command_runner.go:130] > b5213941
	I1213 14:49:35.872189 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:49:35.879531 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.886529 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 14:49:35.894015 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897550 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897859 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.897930 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 14:49:35.938203 1297065 command_runner.go:130] > 51391683
	I1213 14:49:35.938708 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 14:49:35.946069 1297065 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.953176 1297065 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 14:49:35.960486 1297065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964477 1297065 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964589 1297065 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:49:35.964665 1297065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 14:49:36.007360 1297065 command_runner.go:130] > 3ec20f2e
	I1213 14:49:36.007602 1297065 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
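
The sequence above installs each CA PEM the OpenSSL way: compute the subject hash with `openssl x509 -hash -noout -in <pem>`, symlink the PEM to /etc/ssl/certs/<hash>.0 with `ln -fs`, then verify the link with `test -L`. The sketch below reproduces that dance with the file names taken from the log; the helper name `trust` is an assumption, not minikube's certs code, and it needs root just like the `sudo ln -fs` in the log.

// catrust.go - sketch of the subject-hash symlink step shown above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trust(pem string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")

	// Equivalent to `ln -fs`: replace any stale link, then point it at the PEM.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/1252934.pem",
		"/usr/share/ca-certificates/12529342.pem",
	} {
		if err := trust(pem); err != nil {
			fmt.Fprintln(os.Stderr, "trust:", pem, err)
		}
	}
}
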
	I1213 14:49:36.019390 1297065 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:49:36.024551 1297065 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:49:36.024587 1297065 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 14:49:36.024604 1297065 command_runner.go:130] > Device: 259,1	Inode: 2346070     Links: 1
	I1213 14:49:36.024612 1297065 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 14:49:36.024618 1297065 command_runner.go:130] > Access: 2025-12-13 14:45:28.579602026 +0000
	I1213 14:49:36.024623 1297065 command_runner.go:130] > Modify: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024628 1297065 command_runner.go:130] > Change: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024634 1297065 command_runner.go:130] >  Birth: 2025-12-13 14:41:24.464587226 +0000
	I1213 14:49:36.024743 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:49:36.067430 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.067964 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:49:36.109753 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.110299 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:49:36.151650 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.152123 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:49:36.199598 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.200366 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:49:36.241923 1297065 command_runner.go:130] > Certificate will not expire
	I1213 14:49:36.242478 1297065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 14:49:36.282927 1297065 command_runner.go:130] > Certificate will not expire
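
Each `openssl x509 -noout -in <crt> -checkend 86400` run above asks whether the certificate expires within the next 24 hours; `Certificate will not expire` is what lets the existing profile certs be reused instead of regenerated. The same check can be done in-process with crypto/x509; the sketch below is an assumption (minikube shells out to openssl, as logged) using a few of the paths from the log.

// checkend.go - sketch of the `-checkend 86400` expiry check via crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, crt := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(crt, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		if soon {
			fmt.Println(crt, ": certificate will expire within 24h")
		} else {
			fmt.Println(crt, ": certificate will not expire")
		}
	}
}
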
	I1213 14:49:36.283387 1297065 kubeadm.go:401] StartCluster: {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:49:36.283480 1297065 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 14:49:36.283586 1297065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:49:36.308975 1297065 cri.go:89] found id: ""
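
The `found id: ""` above comes from listing kube-system containers with a crictl label filter; an empty result means there is nothing running to pause or clean up before the control plane restart. A minimal sketch of that listing, mirroring the exact command in the log (file and function names here are assumptions, not cri.go):

// lspods.go - sketch of listing kube-system container IDs via crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		// Matches the log above: no kube-system containers exist yet.
		fmt.Println(`found id: ""`)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
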
	I1213 14:49:36.309092 1297065 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:49:36.316103 1297065 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 14:49:36.316129 1297065 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 14:49:36.316138 1297065 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 14:49:36.317085 1297065 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 14:49:36.317145 1297065 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 14:49:36.317231 1297065 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 14:49:36.324724 1297065 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:49:36.325158 1297065 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-562018" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.325271 1297065 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "functional-562018" cluster setting kubeconfig missing "functional-562018" context setting]
	I1213 14:49:36.325603 1297065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.326011 1297065 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.326154 1297065 kapi.go:59] client config for functional-562018: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key", CAFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:49:36.326701 1297065 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 14:49:36.326719 1297065 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 14:49:36.326724 1297065 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 14:49:36.326733 1297065 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 14:49:36.326744 1297065 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
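
The kapi.go:59 dump above is the rest.Config minikube builds from the repaired kubeconfig (host https://192.168.49.2:8441, the functional-562018 client cert/key, and the minikube CA), and the envvar lines are client-go logging its feature-gate defaults the first time that client is used. Below is a minimal client-go sketch of building an equivalent client from the same kubeconfig and listing nodes; it is illustrative only, not minikube's kapi code.

// client.go - sketch: build a clientset from the kubeconfig written above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/22122-1251074/kubeconfig"

	// Load the kubeconfig into a *rest.Config, comparable to the one logged by kapi.go:59.
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A simple call against the apiserver on :8441 to confirm the client works.
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println("node:", n.Name)
	}
}
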
	I1213 14:49:36.327001 1297065 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 14:49:36.327093 1297065 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 14:49:36.334496 1297065 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 14:49:36.334531 1297065 kubeadm.go:602] duration metric: took 17.366177ms to restartPrimaryControlPlane
	I1213 14:49:36.334540 1297065 kubeadm.go:403] duration metric: took 51.160034ms to StartCluster
	I1213 14:49:36.334555 1297065 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.334613 1297065 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.335214 1297065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:49:36.335450 1297065 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 14:49:36.335789 1297065 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:49:36.335866 1297065 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 14:49:36.335932 1297065 addons.go:70] Setting storage-provisioner=true in profile "functional-562018"
	I1213 14:49:36.335945 1297065 addons.go:239] Setting addon storage-provisioner=true in "functional-562018"
	I1213 14:49:36.335975 1297065 host.go:66] Checking if "functional-562018" exists ...
	I1213 14:49:36.336461 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.336835 1297065 addons.go:70] Setting default-storageclass=true in profile "functional-562018"
	I1213 14:49:36.336857 1297065 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-562018"
	I1213 14:49:36.337151 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.340699 1297065 out.go:179] * Verifying Kubernetes components...
	I1213 14:49:36.343477 1297065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:49:36.374082 1297065 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 14:49:36.376797 1297065 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:49:36.376892 1297065 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:36.376917 1297065 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 14:49:36.376979 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:36.377245 1297065 kapi.go:59] client config for functional-562018: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key", CAFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 14:49:36.377532 1297065 addons.go:239] Setting addon default-storageclass=true in "functional-562018"
	I1213 14:49:36.377566 1297065 host.go:66] Checking if "functional-562018" exists ...
	I1213 14:49:36.377992 1297065 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:49:36.415567 1297065 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:36.415590 1297065 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 14:49:36.415656 1297065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:49:36.416969 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:36.442534 1297065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:49:36.534721 1297065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:49:36.592567 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:36.600370 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:37.335898 1297065 node_ready.go:35] waiting up to 6m0s for node "functional-562018" to be "Ready" ...
	I1213 14:49:37.335934 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.336074 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336106 1297065 retry.go:31] will retry after 199.574589ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336165 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.336178 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.336184 1297065 retry.go:31] will retry after 285.216803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
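The two "will retry after ..." entries above are the first of many: the apiserver on port 8441 is not accepting connections yet, so minikube keeps re-running kubectl apply with a growing, jittered delay. A small, self-contained sketch of that pattern (this is not minikube's actual retry helper; kubectl being on PATH and the manifest path are assumptions for illustration):

```go
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply --force -f manifest` and, on failure,
// sleeps for a growing, jittered interval before trying again, mirroring the
// retry notices in the log above.
func applyWithRetry(manifest string, attempts int) error {
	backoff := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		// Report the failure and the next retry interval, like retry.go does.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("apply failed (%v): %s; will retry after %v\n", err, out, wait)
		time.Sleep(wait)
		backoff *= 2 // roughly the doubling visible in the logged intervals
	}
	return fmt.Errorf("giving up on %s after %d attempts", manifest, attempts)
}

func main() {
	_ = applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5)
}
```

The point of the jittered, growing delay is simply to avoid hammering an apiserver that is still coming up while still converging quickly once it is reachable.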
	I1213 14:49:37.336272 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:37.336359 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:37.336679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:37.536000 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:37.591050 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.594766 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.594797 1297065 retry.go:31] will retry after 489.410948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.621926 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:37.677113 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:37.681307 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.681342 1297065 retry.go:31] will retry after 401.770697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:37.836587 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:37.836683 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:37.837004 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.083592 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:38.085139 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:38.190416 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.194296 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.194326 1297065 retry.go:31] will retry after 757.686696ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.207705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.207792 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.207830 1297065 retry.go:31] will retry after 505.194475ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.337091 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:38.337186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:38.337548 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.714015 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:38.783498 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:38.783559 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.783593 1297065 retry.go:31] will retry after 988.219406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:38.836722 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:38.836873 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:38.837238 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:38.952600 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:39.020705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:39.020749 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.020768 1297065 retry.go:31] will retry after 1.072702638s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.337164 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:39.337235 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:39.337545 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:39.337593 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:39.772102 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:39.836685 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:39.836850 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:39.837201 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:39.843566 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:39.843633 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:39.843675 1297065 retry.go:31] will retry after 1.296209829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.093780 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:40.156222 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:40.156329 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.156372 1297065 retry.go:31] will retry after 965.768616ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:40.336552 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:40.336651 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:40.336970 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:40.836822 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:40.836895 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:40.837217 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:41.122779 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:41.140323 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:41.215097 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:41.215182 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.215214 1297065 retry.go:31] will retry after 2.369565148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.219568 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:41.219636 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.219656 1297065 retry.go:31] will retry after 2.455142313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:41.336947 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:41.337019 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:41.337416 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:41.837047 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:41.837124 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:41.837388 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:41.837438 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
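The waiter behind node_ready.go keeps issuing the GET shown above and treats "connection refused" as transient. An illustrative sketch, assuming client-go, of polling a node's Ready condition in the same spirit; waitForNodeReady is a hypothetical helper for this example, not minikube's code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node until its Ready condition is True or the
// timeout expires. Errors from the GET (e.g. connection refused while the
// apiserver restarts) are treated as transient and simply retried.
func waitForNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Println("will retry:", err) // e.g. dial tcp 192.168.49.2:8441: connection refused
			return false, nil
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Kubeconfig path copied from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22122-1251074/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNodeReady(cs, "functional-562018", 6*time.Minute); err != nil {
		panic(err)
	}
}
```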
	I1213 14:49:42.337111 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:42.337201 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:42.337621 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:42.836363 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:42.836444 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:42.836803 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:43.336226 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:43.336304 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:43.336552 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:43.585084 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:43.645189 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:43.649081 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.649137 1297065 retry.go:31] will retry after 3.995275361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.675423 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:43.738811 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:43.738856 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.738876 1297065 retry.go:31] will retry after 3.319355388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:43.837038 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:43.837127 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:43.837467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:43.837521 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:44.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:44.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:44.336669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:44.836348 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:44.836422 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:44.836715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:45.336277 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:45.336359 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:45.336715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:45.836385 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:45.836482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:45.836839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:46.336842 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:46.336917 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:46.337174 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:46.337224 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:46.836567 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:46.836641 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:46.837050 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:47.058405 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:47.140540 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:47.144585 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.144615 1297065 retry.go:31] will retry after 3.814662677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:47.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:47.336673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:47.645178 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:47.704569 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:47.708191 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.708226 1297065 retry.go:31] will retry after 4.571128182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:47.836452 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:47.836522 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:47.836787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:48.336260 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:48.336350 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:48.336628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:48.836239 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:48.836318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:48.836676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:48.836743 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:49.336220 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:49.336290 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:49.336531 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:49.836286 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:49.836362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:49.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.336380 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:50.336455 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:50.336799 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.836292 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:50.836366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:50.836651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:50.960127 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:49:51.026705 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:51.026749 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:51.026767 1297065 retry.go:31] will retry after 9.152833031s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:51.336157 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:51.336252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:51.336592 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:51.336645 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:51.836328 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:51.836439 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:51.836752 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:52.280634 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:52.336265 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:52.336348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:52.336649 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:52.351151 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:52.351191 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:52.351210 1297065 retry.go:31] will retry after 6.806315756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:52.837084 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:52.837176 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:52.837503 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:53.336231 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:53.336347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:53.336679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:53.336735 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:53.836278 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:53.836358 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:53.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:54.336375 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:54.336453 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:54.336787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:54.836534 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:54.836609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:54.836960 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:55.336530 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:55.336608 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:55.336965 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:55.337034 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:55.836817 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:55.836889 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:55.837215 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:56.337019 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:56.337095 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:56.337433 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:56.836157 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:56.836242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:56.836511 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:57.336450 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:57.336529 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:57.336899 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:57.836240 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:57.836331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:57.836629 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:57.836681 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:49:58.336184 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:58.336276 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:58.336593 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:58.836286 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:58.836386 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:58.836728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:59.158224 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:49:59.216557 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:49:59.216609 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:59.216627 1297065 retry.go:31] will retry after 13.782587086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:49:59.336899 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:59.336976 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:59.337309 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:49:59.837050 1297065 type.go:168] "Request Body" body=""
	I1213 14:49:59.837117 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:49:59.837393 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:49:59.837436 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:00.179978 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:00.336210 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:00.336301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:00.337482 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 14:50:00.358964 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:00.359008 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:00.359030 1297065 retry.go:31] will retry after 12.357990487s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:00.836789 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:00.836882 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:00.837203 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:01.336532 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:01.336609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:01.336921 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:01.836255 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:01.836341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:01.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:02.336512 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:02.336592 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:02.336956 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:02.337013 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:02.836530 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:02.836611 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:02.836888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:03.336279 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:03.336358 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:03.336701 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:03.836401 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:03.836474 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:03.836845 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:04.336380 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:04.336454 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:04.336784 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:04.836248 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:04.836328 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:04.836660 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:04.836716 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:05.336407 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:05.336490 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:05.336806 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:05.836467 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:05.836548 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:05.836865 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:06.336870 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:06.336953 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:06.337350 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:06.837024 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:06.837097 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:06.837419 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:06.837478 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:07.336416 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:07.336488 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:07.336747 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:07.836490 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:07.836567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:07.836906 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:08.336625 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:08.336699 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:08.337020 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:08.836518 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:08.836588 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:08.836865 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:09.336612 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:09.336692 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:09.337049 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:09.337109 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:09.836858 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:09.836939 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:09.837272 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:10.337051 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:10.337125 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:10.337387 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:10.837153 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:10.837234 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:10.837582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:11.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:11.336261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:11.336582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:11.836188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:11.836256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:11.836517 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:11.836567 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:12.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:12.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:12.336916 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:12.717305 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:12.775348 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:12.775393 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:12.775414 1297065 retry.go:31] will retry after 16.474515121s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:12.836567 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:12.836650 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:12.837019 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:13.000372 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:13.059399 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:13.063613 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:13.063652 1297065 retry.go:31] will retry after 8.071550656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:13.336122 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:13.336199 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:13.336467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:13.836136 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:13.836218 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:13.836591 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:13.836660 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:14.336363 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:14.336438 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:14.336774 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:14.836188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:14.836261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:14.836540 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:15.336277 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:15.336353 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:15.336677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:15.836219 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:15.836293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:15.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:16.336546 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:16.336617 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:16.336864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:16.336904 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:16.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:16.836348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:16.836648 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:17.336586 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:17.336661 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:17.337008 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:17.836520 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:17.836585 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:17.836858 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:18.336257 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:18.336354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:18.336715 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:18.836428 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:18.836502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:18.836842 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:18.836899 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:19.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:19.336311 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:19.336563 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:19.836231 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:19.836306 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:19.836619 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:20.336334 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:20.336416 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:20.336797 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:20.836189 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:20.836268 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:20.836618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:21.136217 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:21.193283 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:21.196963 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:21.196996 1297065 retry.go:31] will retry after 15.530830741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:21.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:21.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:21.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:21.336677 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:21.836352 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:21.836433 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:21.836751 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:22.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:22.336615 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:22.336948 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:22.836275 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:22.836348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:22.836696 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:23.336400 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:23.336482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:23.336828 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:23.336887 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:23.836198 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:23.836296 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:23.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:24.336254 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:24.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:24.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:24.836327 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:24.836403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:24.836743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:25.336278 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:25.336354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:25.336703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:25.836226 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:25.836309 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:25.836679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:25.836740 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:26.337200 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:26.337293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:26.337628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:26.836405 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:26.836480 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:26.836777 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:27.336562 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:27.336653 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:27.337005 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:27.836230 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:27.836307 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:27.836657 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:28.336177 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:28.336267 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:28.336587 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:28.336638 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:28.836250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:28.836332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:28.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:29.250199 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:29.308318 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:29.311716 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:29.311747 1297065 retry.go:31] will retry after 30.463725654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:29.336999 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:29.337080 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:29.337458 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:29.836155 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:29.836222 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:29.836520 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:30.336243 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:30.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:30.336620 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:30.336669 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:30.836228 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:30.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:30.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:31.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:31.336285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:31.336588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:31.836269 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:31.836340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:31.836687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:32.336490 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:32.336568 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:32.336902 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:32.336957 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:32.836192 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:32.836262 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:32.836598 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:33.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:33.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:33.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:33.836245 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:33.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:33.836664 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:34.336184 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:34.336253 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:34.336535 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:34.836284 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:34.836360 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:34.836791 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:34.836848 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:35.336516 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:35.336596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:35.336938 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:35.836527 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:35.836598 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:35.836864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:36.336942 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:36.337020 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:36.337342 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:36.728993 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:50:36.785078 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:36.788836 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:36.788868 1297065 retry.go:31] will retry after 31.693829046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:36.837069 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:36.837145 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:36.837461 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:36.837513 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:37.336193 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:37.336260 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:37.336549 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:37.836234 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:37.836312 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:37.836628 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:38.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:38.336329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:38.336672 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:38.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:38.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:38.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:39.336229 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:39.336304 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:39.336650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:39.336709 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:39.836236 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:39.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:39.836618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:40.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:40.336355 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:40.336614 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:40.836272 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:40.836382 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:40.836789 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:41.336524 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:41.336601 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:41.336927 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:41.336987 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:41.836201 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:41.836278 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:41.836626 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:42.336633 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:42.336716 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:42.337072 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:42.836881 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:42.836955 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:42.837306 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:43.337071 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:43.337144 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:43.337415 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:43.337468 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:43.836983 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:43.837056 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:43.837412 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:44.336153 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:44.336229 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:44.336573 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:44.836267 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:44.836356 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:44.836695 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:45.336450 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:45.336538 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:45.336949 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:45.836752 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:45.836829 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:45.837178 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:45.837235 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:46.336981 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:46.337060 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:46.337351 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:46.836213 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:46.836319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:46.836733 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:47.336523 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:47.336680 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:47.336969 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:47.836511 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:47.836579 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:47.836844 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:48.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:48.336310 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:48.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:48.336704 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:48.836371 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:48.836487 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:48.836832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:49.336188 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:49.336255 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:49.336544 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:49.836263 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:49.836365 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:49.836653 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:50.336392 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:50.336468 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:50.336808 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:50.336866 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:50.836325 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:50.836439 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:50.836713 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:51.336252 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:51.336346 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:51.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:51.836280 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:51.836353 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:51.836671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:52.336550 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:52.336630 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:52.336893 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:52.336943 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:52.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:52.836322 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:52.836667 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:53.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:53.336342 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:53.336699 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:53.836191 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:53.836264 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:53.836543 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:54.336246 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:54.336345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:54.336700 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:54.836402 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:54.836475 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:54.836811 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:54.836869 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:55.336360 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:55.336437 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:55.336727 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:55.836432 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:55.836512 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:55.836850 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:56.337034 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:56.337132 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:56.337451 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:56.836142 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:56.836214 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:56.836473 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:57.336469 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:57.336554 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:57.336893 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:57.336949 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:57.836297 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:57.836381 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:57.836714 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:58.336385 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:58.336465 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:58.336743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:58.836460 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:58.836541 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:58.836889 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:59.336265 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:59.336340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:59.336697 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:50:59.776318 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:50:59.836157 1297065 type.go:168] "Request Body" body=""
	I1213 14:50:59.836232 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:50:59.836466 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:50:59.836509 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:50:59.839555 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:50:59.839592 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 14:50:59.839611 1297065 retry.go:31] will retry after 31.022889465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
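Both addon applies in this log fail the same way: with the apiserver down, kubectl cannot download the OpenAPI schema it uses for client-side validation (the error itself points at --validate=false as an escape hatch), so minikube queues another attempt; retry.go reports a ~31 s backoff here. A minimal sketch of that apply-and-retry pattern, assuming a fixed backoff and a direct kubectl invocation rather than minikube's actual retry helper and sudo/KUBECONFIG wrapper (applyAddon is a hypothetical helper; only the kubectl path and manifest are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyAddon shells out to kubectl the way the log above does, with the
// sudo/KUBECONFIG wrapping omitted for brevity.
func applyAddon(kubectl, manifest string) error {
	out, err := exec.Command(kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"
	backoff := 31 * time.Second // the log shows ~31s here; minikube picks the value per attempt
	for attempt := 1; attempt <= 3; attempt++ {
		err := applyAddon(kubectl, manifest)
		if err == nil {
			fmt.Println("addon manifest applied")
			return
		}
		fmt.Printf("attempt %d failed, will retry after %s: %v\n", attempt, backoff, err)
		time.Sleep(backoff)
	}
	fmt.Println("giving up; the addon is reported as not enabled")
}

In this run every attempt hits the same connection-refused error, which is why the addons phase below ends with "Enabled addons:" and an empty list.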
	I1213 14:51:00.336284 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:00.336385 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:00.337017 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:00.836870 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:00.836951 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:00.837274 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:01.337018 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:01.337093 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:01.337377 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:01.836106 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:01.836178 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:01.836532 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:01.836591 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:02.336582 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:02.336658 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:02.336989 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:02.836525 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:02.836602 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:02.836897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:03.336270 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:03.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:03.336732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:03.836448 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:03.836526 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:03.836864 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:03.836920 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:04.336555 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:04.336630 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:04.336888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:04.836543 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:04.836644 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:04.836971 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:05.336771 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:05.336847 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:05.337186 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:05.836528 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:05.836603 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:05.836858 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:06.336901 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:06.336978 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:06.337275 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:06.337322 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:06.836616 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:06.836698 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:06.837028 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:07.336511 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:07.336579 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:07.336834 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:07.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:07.836317 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:07.836644 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:08.336229 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:08.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:08.336668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:08.482933 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 14:51:08.546772 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:08.546820 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:08.546914 1297065 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 14:51:08.836114 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:08.836184 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:08.836454 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:08.836495 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:09.336176 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:09.336252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:09.336597 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:09.836346 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:09.836424 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:09.836727 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:10.336174 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:10.336265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:10.336548 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:10.836166 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:10.836272 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:10.836571 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:10.836621 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:11.336180 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:11.336265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:11.336582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:11.836217 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:11.836300 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:11.836626 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:12.336568 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:12.336663 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:12.336970 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:12.836801 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:12.836879 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:12.837247 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:12.837301 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:13.336980 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:13.337062 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:13.337320 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:13.837125 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:13.837211 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:13.837540 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:14.336301 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:14.336390 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:14.336757 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:14.836166 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:14.836241 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:14.836499 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:15.336228 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:15.336300 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:15.336648 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:15.336706 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:15.836385 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:15.836461 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:15.836787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:16.336816 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:16.336889 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:16.337169 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:16.836948 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:16.837028 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:16.837350 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:17.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:17.336319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:17.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:17.836172 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:17.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:17.836555 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:17.836606 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:18.336236 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:18.336313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:18.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:18.836373 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:18.836452 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:18.836760 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:19.336167 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:19.336238 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:19.336538 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:19.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:19.836297 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:19.836617 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:19.836675 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:20.336339 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:20.336412 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:20.336771 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:20.836180 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:20.836251 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:20.836567 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:21.336259 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:21.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:21.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:21.836380 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:21.836462 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:21.836800 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:21.836855 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:22.336522 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:22.336591 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:22.336867 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:22.836547 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:22.836626 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:22.836957 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:23.336750 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:23.336825 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:23.337141 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:23.836507 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:23.836582 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:23.836841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:23.836883 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:24.336607 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:24.336681 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:24.337016 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:24.836840 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:24.836916 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:24.837240 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:25.336547 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:25.336619 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:25.336933 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:25.836630 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:25.836712 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:25.837049 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:25.837104 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:26.337004 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:26.337079 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:26.337406 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:26.836128 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:26.836203 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:26.836467 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:27.336400 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:27.336476 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:27.336833 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:27.836240 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:27.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:27.836680 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:28.336379 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:28.336452 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:28.336710 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:28.336750 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:28.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:28.836333 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:28.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:29.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:29.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:29.336705 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:29.836347 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:29.836422 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:29.836690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:30.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:30.336351 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:30.336706 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:30.836402 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:30.836499 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:30.836836 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:30.836891 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:30.863046 1297065 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 14:51:30.922204 1297065 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:30.922247 1297065 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 14:51:30.922363 1297065 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 14:51:30.925463 1297065 out.go:179] * Enabled addons: 
	I1213 14:51:30.929007 1297065 addons.go:530] duration metric: took 1m54.593151344s for enable addons: enabled=[]
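(Editor's note) The storage-provisioner failure above is simply a kubectl apply run on the node while the apiserver is unreachable; minikube logs "apply failed, will retry" and then gives up, enabling no addons. A rough, hypothetical sketch of that retry-on-failure behaviour, with the paths copied from the log; the attempt count and delay are assumptions and this is not the code behind addons.go:

// apply_addon.go - illustrative only, not minikube source.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
	}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err == nil {
			fmt.Println("storage-provisioner applied")
			return
		}
		// While the apiserver is down this prints the same openapi/validation
		// "connection refused" error that appears in the log above.
		fmt.Printf("attempt %d failed, will retry: %v\n%s\n", attempt, err, out)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("giving up on storage-provisioner")
}

As the error message itself notes, validation could be skipped with --validate=false, but that would not help here: the apply still needs a reachable apiserver.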
	I1213 14:51:31.336478 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:31.336574 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:31.336911 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:31.836663 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:31.836742 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:31.837400 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:32.336285 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:32.336362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:32.337832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1213 14:51:32.836218 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:32.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:32.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:33.336237 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:33.336311 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:33.336634 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:33.336688 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:33.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:33.836296 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:33.836630 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:34.336182 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:34.336256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:34.336569 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:34.836237 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:34.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:34.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:35.336246 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:35.336327 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:35.336677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:35.336739 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:35.836381 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:35.836450 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:35.836754 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:36.336847 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:36.336928 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:36.337255 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:36.836533 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:36.836613 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:36.836939 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:37.336507 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:37.336573 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:37.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:37.336879 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:37.836228 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:37.836301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:37.836594 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:38.336263 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:38.336341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:38.336675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:38.836200 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:38.836285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:38.836811 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:39.336276 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:39.336373 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:39.336728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:39.836259 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:39.836349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:39.836684 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:39.836742 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:40.336215 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:40.336295 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:40.336618 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:40.836435 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:40.836524 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:40.836905 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:41.336775 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:41.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:41.337175 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:41.836559 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:41.836631 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:41.836894 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:41.836936 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:42.336658 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:42.336748 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:42.337128 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:42.836909 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:42.836987 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:42.837289 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:43.337040 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:43.337127 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:43.337474 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:43.836192 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:43.836275 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:43.836624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:44.336291 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:44.336388 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:44.336784 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:44.336841 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:44.836180 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:44.836254 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:44.836551 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:45.336321 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:45.336400 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:45.336690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:45.836435 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:45.836510 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:45.836833 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:46.336779 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:46.336848 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:46.337141 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:46.337201 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:46.836518 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:46.836596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:46.836935 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:47.336893 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:47.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:47.337308 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:47.836529 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:47.836614 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:47.836876 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:48.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:48.336337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:48.336692 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:48.836415 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:48.836494 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:48.836834 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:48.836892 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:49.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:49.336621 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:49.336897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:49.836323 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:49.836400 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:49.836703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:50.336284 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:50.336361 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:50.336695 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:50.836346 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:50.836424 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:50.836742 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:51.336225 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:51.336303 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:51.336650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:51.336709 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:51.836373 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:51.836452 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:51.836792 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:52.336469 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:52.336538 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:52.336793 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:52.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:52.836345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:52.836678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:53.336269 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:53.336349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:53.336675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:53.336740 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:53.836126 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:53.836205 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:53.836462 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:54.336204 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:54.336277 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:54.336594 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:54.836224 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:54.836301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:54.836659 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:55.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:55.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:55.336616 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:55.836312 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:55.836389 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:55.836728 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:55.836782 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:56.336654 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:56.336732 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:56.337071 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:56.836525 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:56.836605 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:56.836887 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:57.336719 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:57.336796 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:57.337143 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:57.836841 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:57.836920 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:57.837247 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:51:57.837302 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:51:58.337040 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:58.337110 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:58.337364 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:58.837119 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:58.837198 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:58.837538 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:59.336279 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:59.336367 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:59.336734 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:51:59.836438 1297065 type.go:168] "Request Body" body=""
	I1213 14:51:59.836511 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:51:59.836774 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:00.355395 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:00.355523 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:00.355852 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:00.355945 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:00.836731 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:00.836813 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:00.837145 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:01.336514 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:01.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:01.336898 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:01.836757 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:01.836832 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:01.837174 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:02.336946 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:02.337023 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:02.337363 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:02.836523 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:02.836599 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:02.836906 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:02.836965 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:03.336199 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:03.336271 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:03.336598 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:03.836313 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:03.836395 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:03.836725 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:04.336141 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:04.336218 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:04.336472 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:04.836200 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:04.836276 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:04.836590 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:05.336247 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:05.336324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:05.336654 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:05.336712 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:05.836185 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:05.836257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:05.836570 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:06.336596 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:06.336670 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:06.337028 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:06.836851 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:06.836932 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:06.837278 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:07.337039 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:07.337104 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:07.337364 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:07.337404 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:07.837188 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:07.837264 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:07.837630 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:08.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:08.336324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:08.336644 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:08.836195 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:08.836269 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:08.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:09.336284 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:09.336374 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:09.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:09.836403 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:09.836488 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:09.836831 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:09.836885 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:10.336187 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:10.336264 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:10.336588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:10.836238 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:10.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:10.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:11.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:11.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:11.336683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:11.836362 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:11.836437 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:11.836693 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:12.336616 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:12.336691 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:12.337039 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:12.337098 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:12.836854 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:12.836931 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:12.837269 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:13.337012 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:13.337077 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:13.337331 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:13.837136 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:13.837214 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:13.837562 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:14.336275 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:14.336353 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:14.336653 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:14.836184 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:14.836257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:14.836550 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:14.836598 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:15.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:15.336321 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:15.336652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:15.836388 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:15.836477 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:15.836811 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:16.336837 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:16.336907 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:16.337175 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:16.836969 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:16.837065 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:16.837433 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:16.837491 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:17.336248 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:17.336323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:17.336684 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:17.836232 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:17.836298 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:17.836601 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:18.336212 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:18.336289 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:18.336616 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:18.836403 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:18.836489 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:18.836838 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:19.336195 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:19.336269 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:19.336594 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:19.336650 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:19.836346 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:19.836429 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:19.836796 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:20.336507 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:20.336589 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:20.336899 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:20.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:20.836258 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:20.836517 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:21.336228 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:21.336302 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:21.336632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:21.336692 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:21.836262 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:21.836338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:21.836657 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:22.336526 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:22.336604 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:22.336882 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:22.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:22.836317 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:22.836632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:23.336264 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:23.336373 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:23.336709 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:23.336768 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:23.836406 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:23.836477 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:23.836791 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:24.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:24.336336 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:24.336674 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:24.836391 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:24.836474 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:24.836800 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:25.336188 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:25.336256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:25.336595 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:25.836360 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:25.836444 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:25.836782 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:25.836842 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:26.336659 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:26.336742 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:26.337133 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:26.836533 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:26.836602 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:26.836915 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:27.336718 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:27.336789 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:27.337149 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:27.836949 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:27.837024 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:27.837383 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:27.837440 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:28.337164 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:28.337233 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:28.337486 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:28.836203 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:28.836284 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:28.836624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:29.336359 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:29.336444 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:29.336786 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:29.836473 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:29.836542 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:29.836800 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:30.336266 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:30.336346 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:30.336712 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:30.336778 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:30.836448 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:30.836530 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:30.836895 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:31.336594 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:31.336667 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:31.336934 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:31.836251 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:31.836334 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:31.836670 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:32.336445 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:32.336545 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:32.336826 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:32.336874 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:32.836183 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:32.836254 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:32.836608 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:33.336221 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:33.336296 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:33.336654 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:33.836240 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:33.836319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:33.836658 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:34.336330 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:34.336399 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:34.336664 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:34.836346 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:34.836426 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:34.836772 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:34.836831 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:35.336328 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:35.336410 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:35.336774 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:35.836188 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:35.836261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:35.836582 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:36.336650 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:36.336733 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:36.337068 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:36.836880 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:36.836955 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:36.837277 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:36.837337 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:37.336192 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:37.336266 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:37.336525 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:37.836226 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:37.836309 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:37.836638 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:38.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:38.336322 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:38.336672 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:38.836202 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:38.836273 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:38.836547 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:39.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:39.336358 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:39.336701 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:39.336754 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:39.836426 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:39.836508 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:39.836821 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:40.336191 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:40.336263 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:40.336564 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:40.836260 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:40.836361 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:40.836721 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:41.336424 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:41.336505 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:41.336831 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:41.336888 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:41.836209 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:41.836299 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:41.836584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:42.336696 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:42.336785 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:42.337191 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:42.836996 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:42.837071 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:42.837403 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:43.336118 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:43.336196 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:43.336449 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:43.836158 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:43.836243 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:43.836549 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:43.836602 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:44.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:44.336324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:44.336613 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:44.836191 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:44.836266 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:44.836521 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:45.336296 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:45.336373 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:45.336712 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:45.836269 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:45.836353 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:45.836712 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:45.836772 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:46.336576 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:46.336646 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:46.336952 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:46.836262 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:46.836354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:46.836668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:47.336578 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:47.336658 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:47.336990 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:47.836518 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:47.836598 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:47.836865 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:47.836918 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:48.336636 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:48.336714 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:48.337035 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:48.836837 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:48.836909 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:48.837235 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:49.336532 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:49.336609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:49.336874 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:49.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:49.836320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:49.836663 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:50.336257 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:50.336343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:50.336683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:50.336737 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:50.836244 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:50.836318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:50.836588 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:51.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:51.336340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:51.336658 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:51.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:51.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:51.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:52.336454 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:52.336534 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:52.336820 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:52.336867 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:52.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:52.836323 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:52.836674 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:53.336385 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:53.336470 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:53.336787 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:53.836193 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:53.836269 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:53.836583 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:54.336266 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:54.336348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:54.336708 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:54.836271 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:54.836348 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:54.836719 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:54.836775 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:55.336414 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:55.336481 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:55.336738 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:55.836424 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:55.836502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:55.836840 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:56.336926 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:56.337006 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:56.337393 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:56.837161 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:56.837240 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:56.837514 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:56.837556 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:57.336486 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:57.336562 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:57.336904 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:57.836247 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:57.836326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:57.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:58.336169 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:58.336261 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:58.336585 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:58.836253 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:58.836338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:58.836686 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:52:59.336405 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:59.336488 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:59.336818 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:52:59.336881 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:52:59.836205 1297065 type.go:168] "Request Body" body=""
	I1213 14:52:59.836279 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:52:59.836602 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:00.336348 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:00.336434 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:00.336755 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:00.836458 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:00.836538 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:00.836919 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:01.336481 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:01.336559 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:01.336870 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:01.336917 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:01.836269 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:01.836349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:01.836651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:02.336510 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:02.336585 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:02.336875 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:02.836559 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:02.836633 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:02.836887 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:03.336237 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:03.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:03.336652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:03.836262 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:03.836343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:03.836681 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:03.836743 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:04.336178 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:04.336263 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:04.336579 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:04.836312 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:04.836395 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:04.836768 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:05.336328 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:05.336405 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:05.336722 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:05.836169 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:05.836249 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:05.836532 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:06.337061 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:06.337133 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:06.337448 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:06.337510 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:06.836170 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:06.836312 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:06.836648 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:07.336505 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:07.336579 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:07.336834 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:07.836162 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:07.836243 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:07.836604 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:08.336414 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:08.336492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:08.336833 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:08.836389 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:08.836459 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:08.836768 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:08.836825 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:09.336249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:09.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:09.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:09.836385 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:09.836463 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:09.836810 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:10.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:10.336589 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:10.336857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:10.836237 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:10.836310 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:10.836668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:11.336409 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:11.336502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:11.336898 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:11.336954 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:11.836193 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:11.836273 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:11.836553 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:12.336497 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:12.336582 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:12.336916 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:12.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:12.836346 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:12.836677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:13.336363 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:13.336435 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:13.336727 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:13.836260 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:13.836336 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:13.836645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:13.836693 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:14.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:14.336320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:14.336647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:14.836180 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:14.836273 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:14.836579 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:15.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:15.336342 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:15.336712 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:15.836446 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:15.836528 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:15.836857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:15.836911 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:16.336886 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:16.336953 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:16.337211 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:16.836970 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:16.837043 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:16.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:17.336898 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:17.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:17.337298 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:17.837031 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:17.837110 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:17.837392 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:17.837435 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:18.336966 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:18.337049 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:18.337376 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:18.837166 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:18.837253 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:18.837689 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:19.336193 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:19.336283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:19.336617 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:19.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:19.836332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:19.836666 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:20.336399 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:20.336476 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:20.336824 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:20.336877 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:20.836528 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:20.836607 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:20.836879 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:21.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:21.336319 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:21.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:21.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:21.836331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:21.836682 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:22.336425 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:22.336492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:22.336751 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:22.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:22.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:22.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:22.836701 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:23.336413 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:23.336491 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:23.336832 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:23.836195 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:23.836282 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:23.836590 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:24.336251 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:24.336331 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:24.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:24.836249 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:24.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:24.836645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:25.336331 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:25.336403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:25.336743 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:25.336792 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:25.836245 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:25.836324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:25.836644 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:26.336605 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:26.336680 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:26.337038 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:26.836509 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:26.836578 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:26.836824 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:27.336452 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:27.336525 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:27.336887 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:27.336942 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:27.836486 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:27.836568 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:27.836917 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:28.336112 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:28.336186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:28.336563 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:28.836282 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:28.836357 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:28.836713 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:29.336309 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:29.336384 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:29.336723 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:29.836403 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:29.836478 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:29.836733 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:29.836776 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:30.336220 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:30.336298 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:30.336637 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:30.836357 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:30.836431 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:30.836763 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:31.336439 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:31.336532 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:31.336820 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:31.836503 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:31.836582 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:31.836898 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:31.836954 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:32.336893 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:32.336969 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:32.337280 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:32.837017 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:32.837102 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:32.837392 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:33.336206 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:33.336289 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:33.336624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:33.836226 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:33.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:33.836661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:34.336143 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:34.336223 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:34.336515 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:34.336566 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:34.836223 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:34.836309 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:34.836660 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:35.336255 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:35.336337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:35.336768 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:35.836351 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:35.836427 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:35.836683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:36.336777 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:36.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:36.337168 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:36.337222 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:36.837003 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:36.837084 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:36.837449 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:37.336375 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:37.336445 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:37.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:37.836402 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:37.836479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:37.836826 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:38.336440 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:38.336525 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:38.336860 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:38.836259 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:38.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:38.836606 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:38.836659 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:39.336506 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:39.336596 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:39.337235 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:39.836335 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:39.836421 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:39.836794 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:40.336522 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:40.336587 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:40.336888 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:40.836592 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:40.836674 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:40.837021 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:40.837076 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:41.336578 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:41.336655 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:41.336975 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:41.836525 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:41.836604 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:41.836959 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:42.336767 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:42.336851 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:42.337172 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:42.836977 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:42.837055 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:42.837406 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:42.837463 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:43.336096 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:43.336165 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:43.336522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:43.836216 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:43.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:43.836677 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:44.336275 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:44.336366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:44.336718 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:44.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:44.836246 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:44.836531 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:45.336294 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:45.336384 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:45.336759 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:45.336815 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:45.836495 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:45.836571 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:45.836902 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:46.336923 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:46.336991 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:46.337248 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:46.836581 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:46.836658 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:46.836955 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:47.336876 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:47.336959 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:47.337291 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:47.337349 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:47.837127 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:47.837195 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:47.837512 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:48.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:48.336347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:48.336704 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:48.836213 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:48.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:48.836647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:49.336183 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:49.336258 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:49.336584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:49.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:49.836330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:49.836652 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:49.836707 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:50.336396 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:50.336475 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:50.336802 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:50.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:50.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:50.836524 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:51.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:51.336340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:51.336661 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:51.836254 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:51.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:51.836673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:51.836731 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:52.336439 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:52.336508 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:52.336813 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:52.836552 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:52.836646 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:52.837037 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:53.336867 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:53.336943 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:53.337248 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:53.836529 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:53.836600 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:53.836882 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:53.836925 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:54.336730 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:54.336804 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:54.337142 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:54.836954 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:54.837030 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:54.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:55.337104 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:55.337186 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:55.337475 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:55.836190 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:55.836268 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:55.836616 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:56.336432 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:56.336515 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:56.336847 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:56.336900 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:56.836181 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:56.836260 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:56.836553 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:57.336500 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:57.336575 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:57.336934 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:57.836737 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:57.836827 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:57.837184 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:58.336550 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:58.336646 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:58.336966 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:53:58.337018 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:53:58.836741 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:58.836828 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:58.837162 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:59.336945 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:59.337026 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:59.337378 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:53:59.836973 1297065 type.go:168] "Request Body" body=""
	I1213 14:53:59.837043 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:53:59.837302 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:00.337185 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:00.337285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:00.337926 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:00.338025 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:00.836242 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:00.836326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:00.836691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:01.336224 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:01.336316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:01.336589 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:01.836224 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:01.836314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:01.836636 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:02.336530 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:02.336607 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:02.336904 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:02.836600 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:02.836677 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:02.837015 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:02.837082 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:03.336835 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:03.336910 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:03.337276 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:03.837094 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:03.837170 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:03.837522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:04.336212 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:04.336283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:04.336559 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:04.836246 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:04.836354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:04.836699 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:05.336254 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:05.336341 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:05.336691 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:05.336745 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:05.836209 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:05.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:05.836622 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:06.336695 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:06.336783 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:06.337108 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:06.836892 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:06.836966 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:06.837308 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:07.336123 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:07.336192 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:07.336465 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:07.836757 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:07.836832 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:07.837160 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:07.837217 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:08.336959 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:08.337035 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:08.337354 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:08.837047 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:08.837117 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:08.837375 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:09.336797 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:09.336876 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:09.337176 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:09.836976 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:09.837060 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:09.837357 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:09.837405 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:10.337145 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:10.337219 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:10.337522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:10.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:10.836324 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:10.836624 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:11.336249 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:11.336335 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:11.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:11.836251 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:11.836329 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:11.836584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:12.336557 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:12.336629 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:12.336964 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:12.337021 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:12.836792 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:12.836867 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:12.837180 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:13.336499 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:13.336580 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:13.336912 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:13.836538 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:13.836617 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:13.836932 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:14.336207 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:14.336299 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:14.336627 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:14.836329 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:14.836410 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:14.836729 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:14.836786 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:15.336238 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:15.336371 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:15.336658 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:15.836347 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:15.836425 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:15.836765 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:16.336570 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:16.336641 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:16.336896 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:16.836234 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:16.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:16.836636 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:17.336499 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:17.336578 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:17.336890 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:17.336950 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:17.836161 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:17.836245 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:17.836561 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:18.336250 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:18.336345 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:18.336673 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:18.836422 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:18.836499 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:18.836856 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:19.336539 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:19.336609 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:19.336871 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:19.836236 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:19.836313 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:19.836657 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:19.836712 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:20.336398 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:20.336479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:20.336829 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:20.836244 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:20.836336 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:20.836642 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:21.336253 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:21.336338 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:21.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:21.836309 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:21.836398 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:21.836758 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:21.836814 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:22.336543 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:22.336624 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:22.336925 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:22.836625 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:22.836707 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:22.837057 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:23.336641 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:23.336724 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:23.337073 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:23.836556 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:23.836634 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:23.836903 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:23.836945 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:24.336275 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:24.336357 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:24.336645 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:24.836238 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:24.836349 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:24.836732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:25.336455 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:25.336529 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:25.336850 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:25.836272 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:25.836354 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:25.836679 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:26.336762 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:26.336843 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:26.337194 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:26.337248 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:26.836530 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:26.836634 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:26.836949 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:27.337082 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:27.337168 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:27.337523 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:27.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:27.836347 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:27.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:28.336192 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:28.336265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:28.336562 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:28.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:28.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:28.836676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:28.836744 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:29.336414 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:29.336563 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:29.336947 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:29.836198 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:29.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:29.836614 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:30.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:30.336318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:30.336656 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:30.836210 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:30.836286 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:30.836612 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:31.336280 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:31.336362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:31.336639 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:31.336684 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:31.836265 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:31.836343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:31.836692 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:32.336488 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:32.336567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:32.336863 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:32.836173 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:32.836265 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:32.836578 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:33.336248 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:33.336322 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:33.336687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:33.336748 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:33.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:33.836362 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:33.836704 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:34.336406 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:34.336478 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:34.336748 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:34.836467 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:34.836551 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:34.836857 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:35.336588 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:35.336668 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:35.337027 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:35.337086 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:35.836514 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:35.836585 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:35.836913 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:36.336967 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:36.337041 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:36.337376 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:36.837202 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:36.837285 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:36.837584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:37.336502 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:37.336591 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:37.336896 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:37.836591 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:37.836694 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:37.837046 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:37.837115 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:38.336899 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:38.336971 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:38.337328 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:38.837050 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:38.837126 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:38.837404 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:39.336160 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:39.336232 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:39.336580 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:39.836289 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:39.836382 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:39.836703 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:40.336371 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:40.336443 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:40.336707 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:40.336759 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:40.836239 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:40.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:40.836655 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:41.336240 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:41.336320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:41.336686 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:41.836231 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:41.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:41.836611 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:42.336623 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:42.336717 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:42.337080 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:42.337132 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:42.836776 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:42.836862 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:42.837178 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:43.336512 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:43.336586 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:43.336846 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:43.836233 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:43.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:43.836685 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:44.336266 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:44.336339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:44.336641 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:44.836273 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:44.836339 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:44.836623 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:44.836676 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:45.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:45.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:45.336676 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:45.836517 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:45.836597 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:45.836920 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:46.336894 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:46.336967 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:46.337224 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:46.837014 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:46.837094 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:46.837437 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:46.837490 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:47.336253 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:47.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:47.336670 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:47.836235 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:47.836302 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:47.836610 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:48.336235 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:48.336318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:48.336671 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:48.836257 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:48.836337 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:48.836678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:49.336349 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:49.336431 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:49.336732 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:49.336821 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:49.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:49.836293 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:49.836634 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:50.336226 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:50.336307 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:50.336635 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:50.836333 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:50.836410 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:50.836688 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:51.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:51.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:51.336678 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:51.836396 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:51.836479 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:51.836771 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:51.836817 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:52.336516 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:52.336593 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:52.336852 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:52.836252 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:52.836340 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:52.836773 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:53.336515 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:53.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:53.336935 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:53.836510 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:53.836587 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:53.836851 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:53.836896 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:54.336367 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:54.336467 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:54.336808 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:54.836267 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:54.836343 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:54.836686 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:55.336171 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:55.336242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:55.336562 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:55.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:55.836320 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:55.836689 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:56.336624 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:56.336725 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:56.337092 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:56.337153 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:56.836464 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:56.836539 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:56.836794 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:57.336513 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:57.336595 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:57.336897 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:57.836221 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:57.836300 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:57.836664 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:58.336100 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:58.336175 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:58.336496 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:58.836220 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:58.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:58.836646 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:54:58.836706 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:54:59.336458 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:59.336535 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:59.336905 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:54:59.836288 1297065 type.go:168] "Request Body" body=""
	I1213 14:54:59.836366 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:54:59.836722 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:00.336435 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:00.336516 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:00.336842 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:00.836803 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:00.836881 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:00.837232 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:00.837290 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:01.336546 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:01.336620 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:01.336919 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:01.836631 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:01.836717 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:01.837061 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:02.336921 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:02.337000 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:02.337379 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:02.837188 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:02.837257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:02.837522 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:02.837565 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:03.336219 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:03.336301 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:03.336654 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:03.836248 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:03.836325 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:03.836635 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:04.336178 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:04.336251 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:04.336567 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:04.836232 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:04.836318 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:04.836669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:05.336242 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:05.336317 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:05.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:05.336713 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:05.836366 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:05.836448 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:05.836735 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:06.336637 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:06.336720 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:06.337074 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:06.836743 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:06.836817 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:06.837172 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:07.336998 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:07.337074 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:07.337343 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:07.337395 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:07.837167 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:07.837242 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:07.837584 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:08.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:08.336315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:08.336647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:08.836178 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:08.836256 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:08.836598 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:09.336224 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:09.336297 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:09.336632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:09.836244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:09.836321 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:09.836675 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:09.836731 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:10.336173 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:10.336248 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:10.336521 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:10.836203 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:10.836283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:10.836642 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:11.336345 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:11.336437 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:11.336802 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:11.836493 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:11.836567 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:11.836846 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:11.836897 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:12.336745 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:12.336822 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:12.337164 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:12.836822 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:12.836903 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:12.837329 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:13.337068 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:13.337137 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:13.337477 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:13.836207 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:13.836286 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:13.836668 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:14.336241 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:14.336327 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:14.336632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:14.336679 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:14.836300 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:14.836375 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:14.836649 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:15.336258 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:15.336332 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:15.336669 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:15.836251 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:15.836333 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:15.836683 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:16.336651 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:16.336729 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:16.337093 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:16.337145 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:16.836909 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:16.836992 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:16.837356 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:17.336137 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:17.336212 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:17.336571 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:17.836247 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:17.836315 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:17.836610 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:18.336244 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:18.336326 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:18.336651 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:18.836230 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:18.836305 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:18.836647 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:18.836705 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:19.336364 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:19.336454 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:19.336797 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:19.836215 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:19.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:19.836625 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:20.336325 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:20.336403 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:20.336754 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:20.836274 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:20.836352 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:20.836687 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:20.836744 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:21.336273 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:21.336350 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:21.336700 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:21.836398 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:21.836482 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:21.836816 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:22.336510 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:22.336583 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:22.336841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:22.836211 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:22.836292 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:22.836650 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:23.336238 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:23.336314 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:23.336696 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:23.336754 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:23.836429 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:23.836502 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:23.836789 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:24.336496 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:24.336580 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:24.336961 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:24.836574 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:24.836650 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:24.836988 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:25.336500 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:25.336566 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:25.336817 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:25.336861 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:25.836628 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:25.836709 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:25.837047 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:26.337039 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:26.337121 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:26.337470 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:26.836162 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:26.836244 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:26.836581 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:27.336591 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:27.336672 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:27.337011 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:27.337065 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:27.836601 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:27.836681 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:27.837000 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:28.336523 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:28.336604 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:28.336874 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:28.836243 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:28.836316 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:28.836662 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:29.336385 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:29.336497 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:29.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:29.836183 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:29.836257 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:29.836558 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:29.836608 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:30.336289 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:30.336367 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:30.336681 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:30.836223 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:30.836310 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:30.836632 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:31.336179 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:31.336247 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:31.336520 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:31.836214 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:31.836288 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:31.836631 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:31.836685 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:32.336406 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:32.336490 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:32.336839 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:32.836185 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:32.836252 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:32.836552 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:33.336245 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:33.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:33.336778 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:33.836367 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:33.836492 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:33.836841 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:33.836899 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:34.336602 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:34.336672 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:34.336962 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:34.836466 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:34.836542 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:34.836843 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:35.336251 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:35.336330 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:35.336690 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:35.836215 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:35.836283 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:35.836600 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:36.336641 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:36.336716 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:36.337095 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 14:55:36.337155 1297065 node_ready.go:55] error getting node "functional-562018" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-562018": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 14:55:36.836776 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:36.836857 1297065 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-562018" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 14:55:36.837203 1297065 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 14:55:37.337030 1297065 type.go:168] "Request Body" body=""
	I1213 14:55:37.337151 1297065 node_ready.go:38] duration metric: took 6m0.001157945s for node "functional-562018" to be "Ready" ...
	I1213 14:55:37.340291 1297065 out.go:203] 
	W1213 14:55:37.343143 1297065 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 14:55:37.343162 1297065 out.go:285] * 
	W1213 14:55:37.345311 1297065 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 14:55:37.348302 1297065 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 14:55:44 functional-562018 containerd[5205]: time="2025-12-13T14:55:44.662059921Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:45 functional-562018 containerd[5205]: time="2025-12-13T14:55:45.818542030Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 13 14:55:45 functional-562018 containerd[5205]: time="2025-12-13T14:55:45.820720139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 13 14:55:45 functional-562018 containerd[5205]: time="2025-12-13T14:55:45.827533213Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:45 functional-562018 containerd[5205]: time="2025-12-13T14:55:45.827946488Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:46 functional-562018 containerd[5205]: time="2025-12-13T14:55:46.766575667Z" level=info msg="No images store for sha256:3e1817b2097897bb33703eb5a3a650e117d1a4379ef0e281fcf78680554b6f9d"
	Dec 13 14:55:46 functional-562018 containerd[5205]: time="2025-12-13T14:55:46.768780549Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-562018\""
	Dec 13 14:55:46 functional-562018 containerd[5205]: time="2025-12-13T14:55:46.775718289Z" level=info msg="ImageCreate event name:\"sha256:e026052059b45d788f94e5aa4af0bc6e32bbfa2d449adbca80836f551dadd042\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:46 functional-562018 containerd[5205]: time="2025-12-13T14:55:46.776342439Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:47 functional-562018 containerd[5205]: time="2025-12-13T14:55:47.577118370Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 13 14:55:47 functional-562018 containerd[5205]: time="2025-12-13T14:55:47.579725163Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 13 14:55:47 functional-562018 containerd[5205]: time="2025-12-13T14:55:47.581600722Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 13 14:55:47 functional-562018 containerd[5205]: time="2025-12-13T14:55:47.593674355Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.503955591Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.506314808Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.508621447Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.524501960Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.692540483Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.694713603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.701499321Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.701853086Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.826201242Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.828373279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.836071264Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 14:55:48 functional-562018 containerd[5205]: time="2025-12-13T14:55:48.836725641Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:55:52.784965    9352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:52.785679    9352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:52.786596    9352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:52.788327    9352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:55:52.788920    9352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 14:55:52 up  6:38,  0 user,  load average: 0.49, 0.33, 0.76
	Linux functional-562018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 14:55:49 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:50 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 13 14:55:50 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:50 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:50 functional-562018 kubelet[9130]: E1213 14:55:50.150060    9130 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:50 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:50 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:50 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 13 14:55:50 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:50 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:50 functional-562018 kubelet[9227]: E1213 14:55:50.895519    9227 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:50 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:50 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:51 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 13 14:55:51 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:51 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:51 functional-562018 kubelet[9248]: E1213 14:55:51.656787    9248 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:51 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:51 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 14:55:52 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 13 14:55:52 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:52 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 14:55:52 functional-562018 kubelet[9268]: E1213 14:55:52.408347    9268 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 14:55:52 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 14:55:52 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
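The long retry loop in the log above is minikube's node-readiness wait: roughly every 500 ms it GETs /api/v1/nodes/functional-562018 and, because the apiserver on 192.168.49.2:8441 never comes up, every attempt ends in "connection refused" until the 6m0s deadline expires and the run exits with GUEST_START. The following is a minimal client-go sketch of that polling pattern, not minikube's actual implementation; the kubeconfig path is a placeholder and only the node name is taken from this run.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube's real wait loop wires up its own client.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// 6-minute budget, mirroring the "wait 6m0s for node" deadline in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "functional-562018", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		} else {
			// While the apiserver is down, this is the repeated "connection refused" seen above.
			fmt.Println("will retry:", err)
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node to be Ready") // the GUEST_START failure path
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}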
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 2 (317.893443ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.21s)
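The status probe above renders a Go text/template ({{.APIServer}}) over minikube's status output, which is why only the word "Stopped" is printed. A standalone sketch of that templating mechanism follows; the Status struct is invented for illustration and is not minikube's real type.

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for minikube's status struct; the field names mirror the
// default `minikube status` output but are otherwise assumptions of this sketch.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints "Stopped", matching the post-mortem output above.
}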

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.79s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562018 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 14:58:42.552439 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:00:18.171552 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:01:41.242072 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:03:42.552408 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:05:18.175373 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-562018 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m13.612112566s)

                                                
                                                
-- stdout --
	* [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000252344s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
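The stderr block above points at two remediation paths: the kubeadm warning states that on a cgroups v1 host the kubelet configuration option 'FailCgroupV1' must be set to 'false' for kubelet v1.35 or newer, and the minikube suggestion is to retry with the systemd cgroup driver. A minimal, hedged sketch of acting on those hints (command shapes are assumed from the suggestion and the original start invocation in this log, not verified against this run):

	# inspect kubelet logs inside the node, per the kubeadm/minikube hint
	out/minikube-linux-arm64 -p functional-562018 ssh -- sudo journalctl -xeu kubelet

	# retry the same start with the cgroup driver minikube suggests
	out/minikube-linux-arm64 start -p functional-562018 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --extra-config=kubelet.cgroup-driver=systemd --wait=all
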
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-562018 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m13.613337313s for "functional-562018" cluster.
I1213 15:08:07.286814 1252934 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-562018
helpers_test.go:244: (dbg) docker inspect functional-562018:

-- stdout --
	[
	    {
	        "Id": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	        "Created": "2025-12-13T14:41:15.451086653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1291703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T14:41:15.527927053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hosts",
	        "LogPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648-json.log",
	        "Name": "/functional-562018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-562018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-562018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	                "LowerDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-562018",
	                "Source": "/var/lib/docker/volumes/functional-562018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-562018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-562018",
	                "name.minikube.sigs.k8s.io": "functional-562018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b22297a29553cdd0dbc4eaa766abcdb1e67465ee18f1e2f5f9917dc8cf6d08",
	            "SandboxKey": "/var/run/docker/netns/f4b22297a295",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-562018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f3:95:ff:30:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd2e7aed753870546b4bccdeed8073ad5795c14334e906d06b964f72dc448c38",
	                    "EndpointID": "a0e947c1e40f773105c811b67b7d1d63f19d3a20060380bbde944bf9bfe39be5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-562018",
	                        "2cd1277ca783"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018: exit status 2 (301.685294ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
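With --format={{.Host}} only the host state is printed, so "Running" together with exit status 2 indicates that some other component of the profile is not healthy (the harness notes this "may be ok"). A hedged way to see the per-component breakdown, assuming the same profile name:

	out/minikube-linux-arm64 status -p functional-562018
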
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-831661 image ls --format short --alsologtostderr                                                                                             │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image ls --format table --alsologtostderr                                                                                             │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ ssh     │ functional-831661 ssh pgrep buildkitd                                                                                                                   │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	│ image   │ functional-831661 image ls --format yaml --alsologtostderr                                                                                              │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image build -t localhost/my-image:functional-831661 testdata/build --alsologtostderr                                                  │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image ls                                                                                                                              │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ delete  │ -p functional-831661                                                                                                                                    │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ start   │ -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	│ start   │ -p functional-562018 --alsologtostderr -v=8                                                                                                             │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:49 UTC │                     │
	│ cache   │ functional-562018 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache add registry.k8s.io/pause:latest                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache add minikube-local-cache-test:functional-562018                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache delete minikube-local-cache-test:functional-562018                                                                              │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl images                                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │                     │
	│ cache   │ functional-562018 cache reload                                                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ kubectl │ functional-562018 kubectl -- --context functional-562018 get pods                                                                                       │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │                     │
	│ start   │ -p functional-562018 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:55:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:55:53.719613 1302865 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:55:53.719728 1302865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:55:53.719732 1302865 out.go:374] Setting ErrFile to fd 2...
	I1213 14:55:53.719735 1302865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:55:53.719985 1302865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:55:53.720335 1302865 out.go:368] Setting JSON to false
	I1213 14:55:53.721190 1302865 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23903,"bootTime":1765613851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:55:53.721260 1302865 start.go:143] virtualization:  
	I1213 14:55:53.724694 1302865 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:55:53.728380 1302865 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:55:53.728496 1302865 notify.go:221] Checking for updates...
	I1213 14:55:53.734124 1302865 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:55:53.736928 1302865 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:55:53.739728 1302865 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:55:53.742545 1302865 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 14:55:53.745302 1302865 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:55:53.748618 1302865 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:55:53.748719 1302865 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:55:53.782535 1302865 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:55:53.782649 1302865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:55:53.845662 1302865 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 14:55:53.829246857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:55:53.845758 1302865 docker.go:319] overlay module found
	I1213 14:55:53.849849 1302865 out.go:179] * Using the docker driver based on existing profile
	I1213 14:55:53.852762 1302865 start.go:309] selected driver: docker
	I1213 14:55:53.852774 1302865 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:53.852875 1302865 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:55:53.852984 1302865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:55:53.929886 1302865 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 14:55:53.921020705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:55:53.930294 1302865 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 14:55:53.930319 1302865 cni.go:84] Creating CNI manager for ""
	I1213 14:55:53.930367 1302865 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:55:53.930406 1302865 start.go:353] cluster config:
	{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:53.933662 1302865 out.go:179] * Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	I1213 14:55:53.936743 1302865 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 14:55:53.939760 1302865 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 14:55:53.942676 1302865 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:55:53.942716 1302865 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 14:55:53.942732 1302865 cache.go:65] Caching tarball of preloaded images
	I1213 14:55:53.942759 1302865 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 14:55:53.942845 1302865 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 14:55:53.942855 1302865 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 14:55:53.942970 1302865 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
	I1213 14:55:53.962568 1302865 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 14:55:53.962579 1302865 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 14:55:53.962597 1302865 cache.go:243] Successfully downloaded all kic artifacts
	I1213 14:55:53.962628 1302865 start.go:360] acquireMachinesLock for functional-562018: {Name:mk6a7956e4fce5d8e0f4d6fe039ab67ad6cd688b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:55:53.962689 1302865 start.go:364] duration metric: took 45.029µs to acquireMachinesLock for "functional-562018"
	I1213 14:55:53.962707 1302865 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:55:53.962711 1302865 fix.go:54] fixHost starting: 
	I1213 14:55:53.962972 1302865 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:55:53.980087 1302865 fix.go:112] recreateIfNeeded on functional-562018: state=Running err=<nil>
	W1213 14:55:53.980106 1302865 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:55:53.983261 1302865 out.go:252] * Updating the running docker "functional-562018" container ...
	I1213 14:55:53.983285 1302865 machine.go:94] provisionDockerMachine start ...
	I1213 14:55:53.983388 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.000833 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.001170 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.001177 1302865 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:55:54.155013 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:55:54.155027 1302865 ubuntu.go:182] provisioning hostname "functional-562018"
	I1213 14:55:54.155091 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.172804 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.173100 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.173108 1302865 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-562018 && echo "functional-562018" | sudo tee /etc/hostname
	I1213 14:55:54.335232 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:55:54.335302 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.353315 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.353625 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.353638 1302865 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-562018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-562018/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-562018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:55:54.503602 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:55:54.503618 1302865 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 14:55:54.503648 1302865 ubuntu.go:190] setting up certificates
	I1213 14:55:54.503664 1302865 provision.go:84] configureAuth start
	I1213 14:55:54.503732 1302865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:55:54.520737 1302865 provision.go:143] copyHostCerts
	I1213 14:55:54.520806 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 14:55:54.520813 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:55:54.520892 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 14:55:54.520992 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 14:55:54.520996 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:55:54.521022 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 14:55:54.521079 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 14:55:54.521082 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:55:54.521105 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 14:55:54.521157 1302865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.functional-562018 san=[127.0.0.1 192.168.49.2 functional-562018 localhost minikube]
	I1213 14:55:54.737947 1302865 provision.go:177] copyRemoteCerts
	I1213 14:55:54.738007 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:55:54.738047 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.756271 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:54.864730 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 14:55:54.885080 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 14:55:54.903456 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 14:55:54.921228 1302865 provision.go:87] duration metric: took 417.552003ms to configureAuth
	I1213 14:55:54.921245 1302865 ubuntu.go:206] setting minikube options for container-runtime
	I1213 14:55:54.921445 1302865 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:55:54.921451 1302865 machine.go:97] duration metric: took 938.161957ms to provisionDockerMachine
	I1213 14:55:54.921458 1302865 start.go:293] postStartSetup for "functional-562018" (driver="docker")
	I1213 14:55:54.921469 1302865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:55:54.921526 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:55:54.921569 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.939146 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.043619 1302865 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:55:55.047116 1302865 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 14:55:55.047136 1302865 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 14:55:55.047147 1302865 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 14:55:55.047201 1302865 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 14:55:55.047279 1302865 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 14:55:55.047377 1302865 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> hosts in /etc/test/nested/copy/1252934
	I1213 14:55:55.047422 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1252934
	I1213 14:55:55.055022 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:55:55.072651 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts --> /etc/test/nested/copy/1252934/hosts (40 bytes)
	I1213 14:55:55.090146 1302865 start.go:296] duration metric: took 168.672467ms for postStartSetup
	I1213 14:55:55.090222 1302865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:55:55.090277 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.110519 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.212743 1302865 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 14:55:55.217665 1302865 fix.go:56] duration metric: took 1.254946074s for fixHost
	I1213 14:55:55.217694 1302865 start.go:83] releasing machines lock for "functional-562018", held for 1.254985507s
	I1213 14:55:55.217771 1302865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:55:55.234536 1302865 ssh_runner.go:195] Run: cat /version.json
	I1213 14:55:55.234580 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.234841 1302865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:55:55.234904 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.258034 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.263005 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.363489 1302865 ssh_runner.go:195] Run: systemctl --version
	I1213 14:55:55.466608 1302865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 14:55:55.470983 1302865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:55:55.471044 1302865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:55:55.478685 1302865 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 14:55:55.478700 1302865 start.go:496] detecting cgroup driver to use...
	I1213 14:55:55.478730 1302865 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 14:55:55.478776 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 14:55:55.494349 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 14:55:55.507276 1302865 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:55:55.507360 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:55:55.523374 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:55:55.537388 1302865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:55:55.656533 1302865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:55:55.769801 1302865 docker.go:234] disabling docker service ...
	I1213 14:55:55.769857 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:55:55.784548 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:55:55.797129 1302865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:55:55.915684 1302865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:55:56.027646 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:55:56.050399 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:55:56.066005 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 14:55:56.076093 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 14:55:56.085556 1302865 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 14:55:56.085627 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 14:55:56.094545 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:55:56.104197 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 14:55:56.114269 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:55:56.123172 1302865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:55:56.132178 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 14:55:56.141074 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 14:55:56.150470 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 14:55:56.160063 1302865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:55:56.167903 1302865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:55:56.175659 1302865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:55:56.295844 1302865 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 14:55:56.441580 1302865 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 14:55:56.441654 1302865 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 14:55:56.445551 1302865 start.go:564] Will wait 60s for crictl version
	I1213 14:55:56.445607 1302865 ssh_runner.go:195] Run: which crictl
	I1213 14:55:56.449128 1302865 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 14:55:56.473587 1302865 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 14:55:56.473654 1302865 ssh_runner.go:195] Run: containerd --version
	I1213 14:55:56.493885 1302865 ssh_runner.go:195] Run: containerd --version
	I1213 14:55:56.518032 1302865 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 14:55:56.521077 1302865 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 14:55:56.537369 1302865 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 14:55:56.544433 1302865 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 14:55:56.547248 1302865 kubeadm.go:884] updating cluster {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:55:56.547410 1302865 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:55:56.547500 1302865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:55:56.572443 1302865 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:55:56.572458 1302865 containerd.go:534] Images already preloaded, skipping extraction
	I1213 14:55:56.572525 1302865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:55:56.603700 1302865 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:55:56.603712 1302865 cache_images.go:86] Images are preloaded, skipping loading
	I1213 14:55:56.603718 1302865 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 14:55:56.603824 1302865 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-562018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 14:55:56.603888 1302865 ssh_runner.go:195] Run: sudo crictl info
	I1213 14:55:56.640969 1302865 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 14:55:56.640988 1302865 cni.go:84] Creating CNI manager for ""
	I1213 14:55:56.640997 1302865 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:55:56.641011 1302865 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:55:56.641033 1302865 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-562018 NodeName:functional-562018 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:55:56.641163 1302865 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-562018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 14:55:56.641238 1302865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 14:55:56.649442 1302865 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:55:56.649507 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:55:56.657006 1302865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 14:55:56.669728 1302865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 14:55:56.682334 1302865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
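The rendered kubeadm config has just been written to /var/tmp/minikube/kubeadm.yaml.new on the node (2087 bytes). One way to inspect it directly is to shell into the profile; a hedged sketch using the profile name from this run, not something the test itself does:

    minikube ssh -p functional-562018 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new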
	I1213 14:55:56.694926 1302865 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 14:55:56.698838 1302865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:55:56.837238 1302865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:55:57.584722 1302865 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018 for IP: 192.168.49.2
	I1213 14:55:57.584733 1302865 certs.go:195] generating shared ca certs ...
	I1213 14:55:57.584753 1302865 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:55:57.584897 1302865 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 14:55:57.584947 1302865 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 14:55:57.584954 1302865 certs.go:257] generating profile certs ...
	I1213 14:55:57.585039 1302865 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key
	I1213 14:55:57.585090 1302865 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee
	I1213 14:55:57.585124 1302865 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key
	I1213 14:55:57.585235 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 14:55:57.585272 1302865 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 14:55:57.585280 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:55:57.585307 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 14:55:57.585330 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:55:57.585354 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 14:55:57.585399 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:55:57.591362 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:55:57.616349 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:55:57.635438 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:55:57.655371 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 14:55:57.672503 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 14:55:57.689594 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 14:55:57.706530 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:55:57.723556 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:55:57.740287 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 14:55:57.757304 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 14:55:57.774649 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:55:57.792687 1302865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:55:57.805822 1302865 ssh_runner.go:195] Run: openssl version
	I1213 14:55:57.812225 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.819503 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 14:55:57.826726 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.830446 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.830502 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.871253 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 14:55:57.878814 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.886029 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:55:57.893560 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.897283 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.897343 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.938225 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:55:57.946132 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.953318 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 14:55:57.960779 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.964616 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.964674 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 14:55:58.013928 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
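The sequence above is minikube installing each CA into the node's trust store: copy the PEM under /usr/share/ca-certificates, symlink it into /etc/ssl/certs, compute the OpenSSL subject hash, and verify that a hash-named link (<subject-hash>.0) exists. A minimal shell sketch of the same pattern, with a placeholder cert path taken from the log (this is not minikube's actual code):

    CERT=/usr/share/ca-certificates/minikubeCA.pem            # placeholder path from the log above
    sudo ln -fs "$CERT" "/etc/ssl/certs/$(basename "$CERT")"  # name-based symlink
    HASH=$(openssl x509 -hash -noout -in "$CERT")              # subject hash, e.g. b5213941
    sudo test -L "/etc/ssl/certs/${HASH}.0" \
        || sudo ln -s "$CERT" "/etc/ssl/certs/${HASH}.0"       # hash-based symlink used by OpenSSL lookups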
	I1213 14:55:58.021993 1302865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:55:58.026144 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:55:58.067380 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:55:58.114887 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:55:58.156572 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:55:58.199117 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:55:58.241809 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
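Each of the checks above relies on openssl's -checkend flag: the command exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, which is how minikube decides whether the existing control-plane certs can be reused. The same check in isolation:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h (or is unreadable); it would be regenerated"
    fi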
	I1213 14:55:58.285184 1302865 kubeadm.go:401] StartCluster: {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:58.285266 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 14:55:58.285327 1302865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:55:58.314259 1302865 cri.go:89] found id: ""
	I1213 14:55:58.314322 1302865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:55:58.322386 1302865 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 14:55:58.322396 1302865 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 14:55:58.322453 1302865 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 14:55:58.329880 1302865 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.330377 1302865 kubeconfig.go:125] found "functional-562018" server: "https://192.168.49.2:8441"
	I1213 14:55:58.331729 1302865 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 14:55:58.341644 1302865 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 14:41:23.876598830 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 14:55:56.689854034 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
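The unified diff above is the drift detection itself: /var/tmp/minikube/kubeadm.yaml is the config left over from the previous start (default enable-admission-plugins list) and kubeadm.yaml.new is the one just rendered for this test (NamespaceAutoProvision), so the restart path reconfigures the control plane instead of reusing it. The check can be reproduced on the node; a hedged sketch, not part of the test:

    minikube ssh -p functional-562018 -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    # a non-zero exit (differences found) is what triggers "will reconfigure cluster"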
	I1213 14:55:58.341663 1302865 kubeadm.go:1161] stopping kube-system containers ...
	I1213 14:55:58.341678 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 14:55:58.341741 1302865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:55:58.374972 1302865 cri.go:89] found id: ""
	I1213 14:55:58.375050 1302865 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 14:55:58.396016 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:55:58.404525 1302865 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 14:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 13 14:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 13 14:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 14:45 /etc/kubernetes/scheduler.conf
	
	I1213 14:55:58.404584 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 14:55:58.412946 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 14:55:58.420580 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.420635 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 14:55:58.428221 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 14:55:58.435971 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.436028 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:55:58.443530 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 14:55:58.451393 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.451448 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 14:55:58.458854 1302865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 14:55:58.466605 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:58.520413 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:59.744405 1302865 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.223964216s)
	I1213 14:55:59.744467 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:59.946438 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:56:00.013725 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
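The five commands above are the individual kubeadm init phases minikube runs for a control-plane restart: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the freshly copied /var/tmp/minikube/kubeadm.yaml. Run by hand on the node as root, with the bundled binaries on PATH, the equivalent would look roughly like this sketch:

    export PATH=/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH
    kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml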
	I1213 14:56:00.113319 1302865 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:56:00.114955 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:00.613579 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:01.114177 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:01.613592 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:02.113571 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:02.613593 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:03.113840 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:03.613569 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:04.114249 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:04.613852 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:05.113537 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:05.613696 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:06.113540 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:06.614342 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:07.113785 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:07.613457 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:08.114283 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:08.613596 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:09.113597 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:09.614352 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:10.114532 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:10.613598 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:11.114365 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:11.614158 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:12.113539 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:12.613531 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:13.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:13.613463 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:14.114527 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:14.614435 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:15.113510 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:15.614373 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:16.114388 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:16.613507 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:17.113567 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:17.614369 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:18.113844 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:18.613714 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:19.114404 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:19.614169 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:20.114541 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:20.613650 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:21.113498 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:21.613589 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:22.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:22.613522 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:23.114240 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:23.614475 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:24.113893 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:24.614467 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:25.114531 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:25.613526 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:26.114346 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:26.614504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:27.113518 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:27.614286 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:28.114181 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:28.613958 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:29.113601 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:29.614343 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:30.114309 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:30.614109 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:31.114271 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:31.613510 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:32.114261 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:32.614199 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:33.114060 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:33.614237 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:34.114371 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:34.614467 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:35.114182 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:35.613614 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:36.113542 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:36.614402 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:37.114233 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:37.613522 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:38.113599 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:38.613584 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:39.114045 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:39.613569 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:40.113521 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:40.613504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:41.113503 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:41.614239 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:42.113697 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:42.614293 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:43.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:43.614231 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:44.114413 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:44.614537 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:45.114187 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:45.613592 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:46.113667 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:46.613755 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:47.113597 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:47.614262 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:48.113463 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:48.613700 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:49.113578 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:49.614192 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:50.113501 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:50.613492 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:51.114160 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:51.613924 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:52.114491 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:52.613532 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:53.113608 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:53.613620 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:54.114432 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:54.614359 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:55.114461 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:55.614143 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:56.113587 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:56.614451 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:57.113619 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:57.613622 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:58.113547 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:58.614429 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:59.113617 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:59.613534 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
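The loop above is minikube waiting for the apiserver process to appear, polling roughly every 500 ms for about a minute. pgrep -xnf only succeeds when the full command line of a running process matches the pattern, and here it never does, so the wait falls through to the log collection below. The probe on its own:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
        && echo "kube-apiserver process found" \
        || echo "no kube-apiserver process yet"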
	I1213 14:57:00.124126 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:00.124233 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:00.200982 1302865 cri.go:89] found id: ""
	I1213 14:57:00.201003 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.201011 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:00.201018 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:00.201100 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:00.237755 1302865 cri.go:89] found id: ""
	I1213 14:57:00.237770 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.237778 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:00.237783 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:00.237861 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:00.301679 1302865 cri.go:89] found id: ""
	I1213 14:57:00.301694 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.301702 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:00.301709 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:00.301778 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:00.347228 1302865 cri.go:89] found id: ""
	I1213 14:57:00.347243 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.347251 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:00.347256 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:00.347356 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:00.376454 1302865 cri.go:89] found id: ""
	I1213 14:57:00.376471 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.376479 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:00.376485 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:00.376555 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:00.408967 1302865 cri.go:89] found id: ""
	I1213 14:57:00.408982 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.408989 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:00.408995 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:00.409059 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:00.437494 1302865 cri.go:89] found id: ""
	I1213 14:57:00.437509 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.437516 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:00.437524 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:00.437534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:00.493840 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:00.493860 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:00.511767 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:00.511785 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:00.579231 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:00.570332   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.571045   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.572740   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.573345   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.575117   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:00.570332   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.571045   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.572740   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.573345   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.575117   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
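The describe-nodes step fails because nothing is serving on the apiserver port yet: every request to localhost:8441 is refused. A quick corroborating check on the node (a hedged sketch, not something the test runs):

    sudo ss -ltnp | grep -w 8441 || echo "nothing listening on 8441 (matches the connection-refused errors above)"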
	I1213 14:57:00.579242 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:00.579253 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:00.641446 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:00.641467 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:03.171486 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:03.181873 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:03.181935 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:03.212211 1302865 cri.go:89] found id: ""
	I1213 14:57:03.212226 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.212232 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:03.212244 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:03.212304 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:03.237934 1302865 cri.go:89] found id: ""
	I1213 14:57:03.237949 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.237957 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:03.237962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:03.238034 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:03.263822 1302865 cri.go:89] found id: ""
	I1213 14:57:03.263836 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.263843 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:03.263848 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:03.263910 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:03.289876 1302865 cri.go:89] found id: ""
	I1213 14:57:03.289890 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.289898 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:03.289902 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:03.289965 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:03.317957 1302865 cri.go:89] found id: ""
	I1213 14:57:03.317972 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.317979 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:03.318000 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:03.318060 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:03.346780 1302865 cri.go:89] found id: ""
	I1213 14:57:03.346793 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.346800 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:03.346805 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:03.346864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:03.371472 1302865 cri.go:89] found id: ""
	I1213 14:57:03.371485 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.371493 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:03.371501 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:03.371512 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:03.399569 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:03.399588 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:03.454307 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:03.454327 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:03.472933 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:03.472951 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:03.538528 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:03.529465   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.530241   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532007   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532517   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.534246   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:03.529465   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.530241   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532007   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532517   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.534246   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:03.538539 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:03.538550 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:06.101738 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:06.112716 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:06.112778 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:06.139740 1302865 cri.go:89] found id: ""
	I1213 14:57:06.139753 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.139759 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:06.139770 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:06.139831 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:06.169906 1302865 cri.go:89] found id: ""
	I1213 14:57:06.169920 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.169927 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:06.169932 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:06.169993 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:06.194468 1302865 cri.go:89] found id: ""
	I1213 14:57:06.194482 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.194492 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:06.194497 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:06.194556 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:06.219346 1302865 cri.go:89] found id: ""
	I1213 14:57:06.219360 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.219367 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:06.219372 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:06.219466 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:06.244844 1302865 cri.go:89] found id: ""
	I1213 14:57:06.244858 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.244865 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:06.244870 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:06.244928 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:06.269412 1302865 cri.go:89] found id: ""
	I1213 14:57:06.269425 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.269433 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:06.269438 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:06.269498 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:06.293947 1302865 cri.go:89] found id: ""
	I1213 14:57:06.293960 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.293967 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:06.293975 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:06.293991 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:06.320232 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:06.320249 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:06.375210 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:06.375229 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:06.392065 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:06.392081 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:06.457910 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:06.449858   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.450478   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452122   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452601   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.454099   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:06.449858   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.450478   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452122   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452601   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.454099   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:06.457920 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:06.457931 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:09.020376 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:09.030584 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:09.030644 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:09.057441 1302865 cri.go:89] found id: ""
	I1213 14:57:09.057455 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.057462 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:09.057467 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:09.057529 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:09.091252 1302865 cri.go:89] found id: ""
	I1213 14:57:09.091266 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.091273 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:09.091277 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:09.091357 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:09.133954 1302865 cri.go:89] found id: ""
	I1213 14:57:09.133969 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.133976 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:09.133981 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:09.134041 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:09.161351 1302865 cri.go:89] found id: ""
	I1213 14:57:09.161365 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.161372 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:09.161386 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:09.161449 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:09.186493 1302865 cri.go:89] found id: ""
	I1213 14:57:09.186507 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.186515 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:09.186519 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:09.186579 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:09.210752 1302865 cri.go:89] found id: ""
	I1213 14:57:09.210766 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.210774 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:09.210779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:09.210841 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:09.235216 1302865 cri.go:89] found id: ""
	I1213 14:57:09.235231 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.235238 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:09.235246 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:09.235256 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:09.290010 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:09.290030 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:09.307105 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:09.307122 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:09.373837 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:09.365574   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.366312   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368046   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368412   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.369910   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:09.365574   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.366312   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368046   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368412   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.369910   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:09.373848 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:09.373862 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:09.435916 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:09.435937 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:11.968947 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:11.978917 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:11.978976 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:12.003367 1302865 cri.go:89] found id: ""
	I1213 14:57:12.003387 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.003395 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:12.003401 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:12.003472 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:12.030862 1302865 cri.go:89] found id: ""
	I1213 14:57:12.030876 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.030883 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:12.030889 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:12.030947 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:12.055991 1302865 cri.go:89] found id: ""
	I1213 14:57:12.056006 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.056014 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:12.056020 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:12.056078 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:12.088685 1302865 cri.go:89] found id: ""
	I1213 14:57:12.088699 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.088706 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:12.088711 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:12.088771 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:12.119175 1302865 cri.go:89] found id: ""
	I1213 14:57:12.119199 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.119206 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:12.119212 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:12.119276 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:12.148170 1302865 cri.go:89] found id: ""
	I1213 14:57:12.148192 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.148199 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:12.148204 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:12.148276 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:12.173907 1302865 cri.go:89] found id: ""
	I1213 14:57:12.173929 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.173936 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:12.173944 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:12.173955 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:12.230024 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:12.230044 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:12.249202 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:12.249219 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:12.317257 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:12.307463   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.308131   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.309930   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.310545   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.312207   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:12.307463   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.308131   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.309930   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.310545   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.312207   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:12.317267 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:12.317284 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:12.384433 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:12.384455 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:14.917091 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:14.927788 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:14.927850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:14.953190 1302865 cri.go:89] found id: ""
	I1213 14:57:14.953205 1302865 logs.go:282] 0 containers: []
	W1213 14:57:14.953212 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:14.953226 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:14.953289 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:14.978043 1302865 cri.go:89] found id: ""
	I1213 14:57:14.978068 1302865 logs.go:282] 0 containers: []
	W1213 14:57:14.978075 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:14.978081 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:14.978175 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:15.004731 1302865 cri.go:89] found id: ""
	I1213 14:57:15.004749 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.004756 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:15.004761 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:15.004846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:15.048669 1302865 cri.go:89] found id: ""
	I1213 14:57:15.048685 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.048693 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:15.048698 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:15.048777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:15.085505 1302865 cri.go:89] found id: ""
	I1213 14:57:15.085520 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.085528 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:15.085534 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:15.085607 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:15.124753 1302865 cri.go:89] found id: ""
	I1213 14:57:15.124776 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.124784 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:15.124790 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:15.124860 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:15.168668 1302865 cri.go:89] found id: ""
	I1213 14:57:15.168682 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.168690 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:15.168698 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:15.168720 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:15.236878 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:15.228546   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.229113   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.230853   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.231344   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.232993   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:15.228546   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.229113   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.230853   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.231344   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.232993   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:15.236889 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:15.236899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:15.299331 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:15.299361 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:15.331125 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:15.331142 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:15.391451 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:15.391478 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:17.910179 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:17.920514 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:17.920590 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:17.945066 1302865 cri.go:89] found id: ""
	I1213 14:57:17.945081 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.945088 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:17.945094 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:17.945152 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:17.972856 1302865 cri.go:89] found id: ""
	I1213 14:57:17.972870 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.972878 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:17.972882 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:17.972944 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:17.999205 1302865 cri.go:89] found id: ""
	I1213 14:57:17.999219 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.999226 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:17.999231 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:17.999288 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:18.034164 1302865 cri.go:89] found id: ""
	I1213 14:57:18.034178 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.034185 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:18.034190 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:18.034255 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:18.060346 1302865 cri.go:89] found id: ""
	I1213 14:57:18.060361 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.060368 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:18.060373 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:18.060438 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:18.089688 1302865 cri.go:89] found id: ""
	I1213 14:57:18.089702 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.089710 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:18.089718 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:18.089780 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:18.128859 1302865 cri.go:89] found id: ""
	I1213 14:57:18.128874 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.128881 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:18.128889 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:18.128899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:18.188820 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:18.188842 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:18.206229 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:18.206247 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:18.277989 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:18.269084   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.269641   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.271489   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.272150   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.273959   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:18.269084   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.269641   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.271489   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.272150   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.273959   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:18.277999 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:18.278009 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:18.339945 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:18.339965 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:20.869114 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:20.879800 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:20.879866 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:20.905760 1302865 cri.go:89] found id: ""
	I1213 14:57:20.905774 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.905781 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:20.905786 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:20.905849 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:20.931353 1302865 cri.go:89] found id: ""
	I1213 14:57:20.931367 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.931374 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:20.931379 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:20.931445 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:20.956682 1302865 cri.go:89] found id: ""
	I1213 14:57:20.956696 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.956704 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:20.956709 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:20.956769 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:20.980824 1302865 cri.go:89] found id: ""
	I1213 14:57:20.980838 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.980845 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:20.980850 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:20.980909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:21.008951 1302865 cri.go:89] found id: ""
	I1213 14:57:21.008974 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.008982 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:21.008987 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:21.009058 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:21.038190 1302865 cri.go:89] found id: ""
	I1213 14:57:21.038204 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.038211 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:21.038216 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:21.038277 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:21.063608 1302865 cri.go:89] found id: ""
	I1213 14:57:21.063622 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.063630 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:21.063638 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:21.063648 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:21.132089 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:21.132109 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:21.171889 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:21.171908 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:21.230786 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:21.230806 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:21.247733 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:21.247753 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:21.318785 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:21.309659   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.310476   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312082   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312759   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.314434   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:21.309659   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.310476   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312082   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312759   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.314434   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:23.819828 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:23.830541 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:23.830604 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:23.853826 1302865 cri.go:89] found id: ""
	I1213 14:57:23.853840 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.853856 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:23.853862 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:23.853933 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:23.879146 1302865 cri.go:89] found id: ""
	I1213 14:57:23.879169 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.879177 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:23.879182 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:23.879253 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:23.904357 1302865 cri.go:89] found id: ""
	I1213 14:57:23.904371 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.904379 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:23.904384 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:23.904450 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:23.929036 1302865 cri.go:89] found id: ""
	I1213 14:57:23.929050 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.929058 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:23.929063 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:23.929124 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:23.954748 1302865 cri.go:89] found id: ""
	I1213 14:57:23.954762 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.954779 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:23.954785 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:23.954854 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:23.979661 1302865 cri.go:89] found id: ""
	I1213 14:57:23.979676 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.979683 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:23.979687 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:23.979750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:24.009902 1302865 cri.go:89] found id: ""
	I1213 14:57:24.009918 1302865 logs.go:282] 0 containers: []
	W1213 14:57:24.009925 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:24.009935 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:24.009946 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:24.079943 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:24.070877   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.071501   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073325   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073881   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.075497   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:24.070877   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.071501   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073325   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073881   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.075497   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:24.079954 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:24.079966 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:24.144015 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:24.144037 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:24.174637 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:24.174654 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:24.235392 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:24.235413 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:26.753238 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:26.763339 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:26.763404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:26.788474 1302865 cri.go:89] found id: ""
	I1213 14:57:26.788487 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.788494 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:26.788499 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:26.788559 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:26.814440 1302865 cri.go:89] found id: ""
	I1213 14:57:26.814454 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.814461 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:26.814466 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:26.814524 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:26.841795 1302865 cri.go:89] found id: ""
	I1213 14:57:26.841809 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.841816 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:26.841821 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:26.841880 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:26.869399 1302865 cri.go:89] found id: ""
	I1213 14:57:26.869413 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.869420 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:26.869425 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:26.869482 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:26.892445 1302865 cri.go:89] found id: ""
	I1213 14:57:26.892459 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.892467 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:26.892472 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:26.892535 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:26.916537 1302865 cri.go:89] found id: ""
	I1213 14:57:26.916558 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.916565 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:26.916570 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:26.916639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:26.940628 1302865 cri.go:89] found id: ""
	I1213 14:57:26.940650 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.940658 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:26.940671 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:26.940681 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:26.969808 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:26.969827 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:27.025191 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:27.025211 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:27.042465 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:27.042482 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:27.122593 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:27.114680   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.115527   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117159   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117448   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.118857   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:27.114680   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.115527   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117159   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117448   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.118857   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:27.122618 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:27.122628 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:29.693191 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:29.703585 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:29.703652 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:29.732578 1302865 cri.go:89] found id: ""
	I1213 14:57:29.732593 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.732614 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:29.732621 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:29.732686 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:29.757517 1302865 cri.go:89] found id: ""
	I1213 14:57:29.757531 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.757538 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:29.757543 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:29.757604 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:29.785456 1302865 cri.go:89] found id: ""
	I1213 14:57:29.785470 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.785476 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:29.785482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:29.785544 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:29.809997 1302865 cri.go:89] found id: ""
	I1213 14:57:29.810011 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.810018 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:29.810023 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:29.810085 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:29.834277 1302865 cri.go:89] found id: ""
	I1213 14:57:29.834292 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.834299 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:29.834304 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:29.834366 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:29.858653 1302865 cri.go:89] found id: ""
	I1213 14:57:29.858667 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.858675 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:29.858686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:29.858749 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:29.884435 1302865 cri.go:89] found id: ""
	I1213 14:57:29.884450 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.884456 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:29.884464 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:29.884477 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:29.911338 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:29.911356 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:29.966819 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:29.966838 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:29.985125 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:29.985141 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:30.070789 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:30.059761   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.060525   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063144   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063902   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.066109   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:30.059761   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.060525   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063144   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063902   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.066109   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:30.070800 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:30.070811 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:32.643832 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:32.654329 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:32.654399 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:32.687375 1302865 cri.go:89] found id: ""
	I1213 14:57:32.687390 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.687398 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:32.687403 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:32.687465 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:32.712437 1302865 cri.go:89] found id: ""
	I1213 14:57:32.712452 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.712460 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:32.712465 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:32.712529 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:32.738220 1302865 cri.go:89] found id: ""
	I1213 14:57:32.738234 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.738241 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:32.738247 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:32.738310 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:32.763211 1302865 cri.go:89] found id: ""
	I1213 14:57:32.763226 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.763233 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:32.763238 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:32.763299 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:32.789049 1302865 cri.go:89] found id: ""
	I1213 14:57:32.789063 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.789071 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:32.789077 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:32.789141 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:32.815194 1302865 cri.go:89] found id: ""
	I1213 14:57:32.815208 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.815215 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:32.815221 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:32.815284 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:32.840629 1302865 cri.go:89] found id: ""
	I1213 14:57:32.840646 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.840653 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:32.840661 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:32.840672 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:32.868556 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:32.868574 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:32.923451 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:32.923472 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:32.940492 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:32.940508 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:33.014646 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:33.000281   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.001080   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.003301   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.004000   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.006154   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:33.000281   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.001080   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.003301   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.004000   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.006154   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:33.014656 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:33.014680 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
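(Editor's note: the repeated "connection refused" on localhost:8441 indicates that nothing is listening on the apiserver port inside the node. A sketch of how that could be confirmed manually; `ss` and `curl` are assumptions for ad-hoc debugging, not commands the test itself runs.)

    # Is anything listening on the apiserver port (8441 in this run)?
    sudo ss -ltn | grep -w 8441 || echo "nothing listening on 8441"

    # If a listener exists, probe the apiserver health endpoint;
    # -k skips TLS verification for this quick check only.
    curl -k https://localhost:8441/livez || echo "apiserver not responding"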
	I1213 14:57:35.576582 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:35.586876 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:35.586939 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:35.612619 1302865 cri.go:89] found id: ""
	I1213 14:57:35.612634 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.612641 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:35.612646 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:35.612714 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:35.637275 1302865 cri.go:89] found id: ""
	I1213 14:57:35.637289 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.637296 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:35.637302 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:35.637363 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:35.661936 1302865 cri.go:89] found id: ""
	I1213 14:57:35.661950 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.661957 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:35.661962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:35.662035 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:35.691702 1302865 cri.go:89] found id: ""
	I1213 14:57:35.691716 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.691722 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:35.691727 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:35.691789 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:35.719594 1302865 cri.go:89] found id: ""
	I1213 14:57:35.719608 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.719614 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:35.719619 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:35.719685 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:35.747602 1302865 cri.go:89] found id: ""
	I1213 14:57:35.747617 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.747624 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:35.747629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:35.747690 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:35.772489 1302865 cri.go:89] found id: ""
	I1213 14:57:35.772503 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.772510 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:35.772517 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:35.772534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:35.801457 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:35.801474 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:35.859688 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:35.859708 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:35.877069 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:35.877087 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:35.942565 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:35.934502   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.935224   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.936888   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.937197   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.938690   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:35.934502   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.935224   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.936888   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.937197   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.938690   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:35.942576 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:35.942595 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:38.506862 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:38.517509 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:38.517575 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:38.542481 1302865 cri.go:89] found id: ""
	I1213 14:57:38.542496 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.542512 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:38.542517 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:38.542586 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:38.567177 1302865 cri.go:89] found id: ""
	I1213 14:57:38.567191 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.567198 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:38.567202 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:38.567264 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:38.591952 1302865 cri.go:89] found id: ""
	I1213 14:57:38.591967 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.591974 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:38.591979 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:38.592036 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:38.615589 1302865 cri.go:89] found id: ""
	I1213 14:57:38.615604 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.615619 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:38.615625 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:38.615697 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:38.641025 1302865 cri.go:89] found id: ""
	I1213 14:57:38.641039 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.641046 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:38.641051 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:38.641115 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:38.666245 1302865 cri.go:89] found id: ""
	I1213 14:57:38.666259 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.666276 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:38.666282 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:38.666355 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:38.691177 1302865 cri.go:89] found id: ""
	I1213 14:57:38.691191 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.691198 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:38.691206 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:38.691217 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:38.748984 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:38.749004 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:38.765774 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:38.765791 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:38.833656 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:38.825292   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.825956   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.827650   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.828246   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.829830   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:38.825292   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.825956   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.827650   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.828246   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.829830   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:38.833672 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:38.833683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:38.895503 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:38.895524 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
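(Editor's note: the log sources gathered above can also be pulled directly on the node when investigating a failed start; these are the same journalctl/dmesg invocations the test runs, copied from the log.)

    # Unit logs for the kubelet and containerd, plus recent kernel warnings.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400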
	I1213 14:57:41.424760 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:41.435082 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:41.435154 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:41.460250 1302865 cri.go:89] found id: ""
	I1213 14:57:41.460265 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.460273 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:41.460278 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:41.460338 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:41.490003 1302865 cri.go:89] found id: ""
	I1213 14:57:41.490017 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.490024 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:41.490029 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:41.490094 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:41.515086 1302865 cri.go:89] found id: ""
	I1213 14:57:41.515100 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.515107 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:41.515112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:41.515173 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:41.540169 1302865 cri.go:89] found id: ""
	I1213 14:57:41.540183 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.540205 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:41.540211 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:41.540279 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:41.564345 1302865 cri.go:89] found id: ""
	I1213 14:57:41.564358 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.564365 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:41.564370 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:41.564429 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:41.589001 1302865 cri.go:89] found id: ""
	I1213 14:57:41.589015 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.589022 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:41.589027 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:41.589091 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:41.617434 1302865 cri.go:89] found id: ""
	I1213 14:57:41.617447 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.617455 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:41.617462 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:41.617471 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:41.683384 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:41.683411 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:41.711592 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:41.711611 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:41.769286 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:41.769305 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:41.786199 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:41.786219 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:41.854485 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:41.846163   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.846886   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848432   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848940   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.850514   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:41.846163   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.846886   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848432   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848940   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.850514   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:44.355606 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:44.369969 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:44.370032 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:44.401460 1302865 cri.go:89] found id: ""
	I1213 14:57:44.401474 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.401481 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:44.401486 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:44.401548 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:44.431513 1302865 cri.go:89] found id: ""
	I1213 14:57:44.431527 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.431534 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:44.431539 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:44.431600 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:44.457242 1302865 cri.go:89] found id: ""
	I1213 14:57:44.457256 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.457263 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:44.457268 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:44.457329 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:44.482224 1302865 cri.go:89] found id: ""
	I1213 14:57:44.482238 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.482245 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:44.482250 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:44.482313 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:44.509856 1302865 cri.go:89] found id: ""
	I1213 14:57:44.509871 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.509878 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:44.509884 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:44.509950 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:44.533977 1302865 cri.go:89] found id: ""
	I1213 14:57:44.533992 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.533999 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:44.534005 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:44.534069 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:44.562015 1302865 cri.go:89] found id: ""
	I1213 14:57:44.562029 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.562036 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:44.562044 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:44.562055 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:44.629999 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:44.621407   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.622108   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.623865   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.624500   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.626099   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:44.621407   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.622108   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.623865   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.624500   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.626099   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:44.630009 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:44.630020 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:44.697021 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:44.697042 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:44.725319 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:44.725336 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:44.783033 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:44.783053 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
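(Editor's note: the same block recurs roughly every three seconds because minikube keeps polling for a kube-apiserver process before giving up. A rough shell equivalent of that wait loop, using the pgrep pattern from the log; the timeout value is illustrative, not the one minikube uses.)

    # Poll for a kube-apiserver process, as the repeated pgrep lines do.
    deadline=$((SECONDS + 300))   # illustrative timeout, not minikube's
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "kube-apiserver never started" >&2
        break
      fi
      sleep 3
    done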
	I1213 14:57:47.300684 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:47.311369 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:47.311431 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:47.343773 1302865 cri.go:89] found id: ""
	I1213 14:57:47.343787 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.343794 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:47.343800 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:47.343864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:47.373867 1302865 cri.go:89] found id: ""
	I1213 14:57:47.373881 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.373888 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:47.373893 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:47.373950 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:47.409488 1302865 cri.go:89] found id: ""
	I1213 14:57:47.409503 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.409510 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:47.409515 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:47.409576 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:47.436144 1302865 cri.go:89] found id: ""
	I1213 14:57:47.436160 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.436166 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:47.436172 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:47.436231 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:47.459642 1302865 cri.go:89] found id: ""
	I1213 14:57:47.459656 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.459664 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:47.459669 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:47.459728 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:47.488525 1302865 cri.go:89] found id: ""
	I1213 14:57:47.488539 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.488546 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:47.488589 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:47.488660 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:47.513277 1302865 cri.go:89] found id: ""
	I1213 14:57:47.513304 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.513312 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:47.513320 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:47.513333 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:47.569182 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:47.569201 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:47.586016 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:47.586033 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:47.657399 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:47.648527   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.649441   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651289   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651877   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.653400   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:47.648527   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.649441   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651289   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651877   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.653400   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:47.657410 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:47.657421 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:47.719756 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:47.719776 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:50.250366 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:50.261360 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:50.261430 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:50.285575 1302865 cri.go:89] found id: ""
	I1213 14:57:50.285588 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.285595 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:50.285601 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:50.285657 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:50.313925 1302865 cri.go:89] found id: ""
	I1213 14:57:50.313939 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.313946 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:50.313951 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:50.314025 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:50.350634 1302865 cri.go:89] found id: ""
	I1213 14:57:50.350653 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.350660 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:50.350665 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:50.350725 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:50.377901 1302865 cri.go:89] found id: ""
	I1213 14:57:50.377915 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.377922 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:50.377927 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:50.377987 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:50.408528 1302865 cri.go:89] found id: ""
	I1213 14:57:50.408550 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.408557 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:50.408562 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:50.408637 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:50.434189 1302865 cri.go:89] found id: ""
	I1213 14:57:50.434203 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.434212 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:50.434217 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:50.434275 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:50.459353 1302865 cri.go:89] found id: ""
	I1213 14:57:50.459367 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.459373 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:50.459381 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:50.459391 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:50.515565 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:50.515585 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:50.532866 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:50.532883 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:50.599094 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:50.590849   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.591681   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593186   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593701   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.595193   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:50.590849   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.591681   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593186   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593701   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.595193   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:50.599104 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:50.599115 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:50.663140 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:50.663159 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:53.200108 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:53.210621 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:53.210684 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:53.236457 1302865 cri.go:89] found id: ""
	I1213 14:57:53.236471 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.236478 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:53.236483 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:53.236545 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:53.269649 1302865 cri.go:89] found id: ""
	I1213 14:57:53.269664 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.269670 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:53.269677 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:53.269738 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:53.293759 1302865 cri.go:89] found id: ""
	I1213 14:57:53.293774 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.293781 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:53.293786 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:53.293846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:53.318675 1302865 cri.go:89] found id: ""
	I1213 14:57:53.318690 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.318696 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:53.318701 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:53.318765 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:53.353544 1302865 cri.go:89] found id: ""
	I1213 14:57:53.353558 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.353564 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:53.353569 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:53.353630 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:53.381535 1302865 cri.go:89] found id: ""
	I1213 14:57:53.381549 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.381565 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:53.381571 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:53.381641 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:53.408473 1302865 cri.go:89] found id: ""
	I1213 14:57:53.408487 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.408494 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:53.408502 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:53.408514 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:53.463646 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:53.463670 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:53.480500 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:53.480518 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:53.545969 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:53.538062   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.538490   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540123   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540597   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.542043   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:53.538062   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.538490   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540123   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540597   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.542043   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:53.545979 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:53.545991 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:53.607729 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:53.607750 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:56.139407 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:56.150264 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:56.150335 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:56.175852 1302865 cri.go:89] found id: ""
	I1213 14:57:56.175866 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.175873 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:56.175878 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:56.175942 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:56.202887 1302865 cri.go:89] found id: ""
	I1213 14:57:56.202901 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.202908 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:56.202921 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:56.202981 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:56.229038 1302865 cri.go:89] found id: ""
	I1213 14:57:56.229053 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.229060 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:56.229065 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:56.229125 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:56.253081 1302865 cri.go:89] found id: ""
	I1213 14:57:56.253096 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.253103 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:56.253108 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:56.253172 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:56.277822 1302865 cri.go:89] found id: ""
	I1213 14:57:56.277836 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.277843 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:56.277849 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:56.277910 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:56.302419 1302865 cri.go:89] found id: ""
	I1213 14:57:56.302435 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.302442 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:56.302447 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:56.302508 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:56.327036 1302865 cri.go:89] found id: ""
	I1213 14:57:56.327050 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.327057 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:56.327066 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:56.327078 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:56.353968 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:56.353986 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:56.426915 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:56.418628   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.419173   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.420794   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.421363   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.422912   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:56.418628   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.419173   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.420794   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.421363   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.422912   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:56.426926 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:56.426943 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:56.488491 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:56.488513 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:56.516737 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:56.516753 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:59.077330 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:59.087745 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:59.087809 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:59.113689 1302865 cri.go:89] found id: ""
	I1213 14:57:59.113703 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.113710 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:59.113715 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:59.113774 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:59.138884 1302865 cri.go:89] found id: ""
	I1213 14:57:59.138898 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.138905 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:59.138911 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:59.138976 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:59.164226 1302865 cri.go:89] found id: ""
	I1213 14:57:59.164240 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.164246 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:59.164254 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:59.164312 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:59.189753 1302865 cri.go:89] found id: ""
	I1213 14:57:59.189767 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.189774 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:59.189779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:59.189840 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:59.219066 1302865 cri.go:89] found id: ""
	I1213 14:57:59.219080 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.219086 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:59.219092 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:59.219152 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:59.243456 1302865 cri.go:89] found id: ""
	I1213 14:57:59.243470 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.243477 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:59.243482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:59.243544 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:59.267676 1302865 cri.go:89] found id: ""
	I1213 14:57:59.267692 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.267699 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:59.267707 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:59.267719 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:59.284600 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:59.284617 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:59.356184 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:59.346354   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.347267   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349163   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349876   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.351596   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:59.346354   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.347267   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349163   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349876   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.351596   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:59.356202 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:59.356215 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:59.427513 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:59.427535 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:59.459203 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:59.459220 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:02.016233 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:02.027182 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:02.027246 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:02.053453 1302865 cri.go:89] found id: ""
	I1213 14:58:02.053467 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.053475 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:02.053480 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:02.053543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:02.081288 1302865 cri.go:89] found id: ""
	I1213 14:58:02.081303 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.081310 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:02.081315 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:02.081377 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:02.106556 1302865 cri.go:89] found id: ""
	I1213 14:58:02.106572 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.106579 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:02.106585 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:02.106645 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:02.131201 1302865 cri.go:89] found id: ""
	I1213 14:58:02.131215 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.131221 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:02.131226 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:02.131286 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:02.156170 1302865 cri.go:89] found id: ""
	I1213 14:58:02.156194 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.156202 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:02.156207 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:02.156275 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:02.185059 1302865 cri.go:89] found id: ""
	I1213 14:58:02.185073 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.185080 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:02.185086 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:02.185153 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:02.209854 1302865 cri.go:89] found id: ""
	I1213 14:58:02.209870 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.209884 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:02.209893 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:02.209903 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:02.279934 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:02.270936   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.271416   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273300   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273902   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.275651   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:02.270936   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.271416   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273300   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273902   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.275651   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:02.279958 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:02.279970 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:02.341869 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:02.341888 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:02.370761 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:02.370783 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:02.431851 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:02.431869 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:04.950137 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:04.960995 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:04.961059 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:04.986243 1302865 cri.go:89] found id: ""
	I1213 14:58:04.986257 1302865 logs.go:282] 0 containers: []
	W1213 14:58:04.986264 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:04.986269 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:04.986329 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:05.016170 1302865 cri.go:89] found id: ""
	I1213 14:58:05.016192 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.016200 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:05.016206 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:05.016270 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:05.042103 1302865 cri.go:89] found id: ""
	I1213 14:58:05.042117 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.042124 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:05.042129 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:05.042188 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:05.066050 1302865 cri.go:89] found id: ""
	I1213 14:58:05.066065 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.066071 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:05.066077 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:05.066141 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:05.091600 1302865 cri.go:89] found id: ""
	I1213 14:58:05.091615 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.091623 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:05.091634 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:05.091698 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:05.117406 1302865 cri.go:89] found id: ""
	I1213 14:58:05.117420 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.117427 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:05.117432 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:05.117491 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:05.143774 1302865 cri.go:89] found id: ""
	I1213 14:58:05.143788 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.143794 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:05.143802 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:05.143823 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:05.198717 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:05.198736 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:05.216110 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:05.216127 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:05.281771 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:05.273944   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.274471   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276069   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276389   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.277907   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:05.273944   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.274471   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276069   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276389   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.277907   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:05.281792 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:05.281804 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:05.344051 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:05.344070 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:07.872032 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:07.883862 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:07.883925 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:07.908603 1302865 cri.go:89] found id: ""
	I1213 14:58:07.908616 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.908623 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:07.908628 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:07.908696 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:07.932609 1302865 cri.go:89] found id: ""
	I1213 14:58:07.932624 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.932631 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:07.932636 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:07.932729 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:07.957476 1302865 cri.go:89] found id: ""
	I1213 14:58:07.957490 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.957497 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:07.957502 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:07.957561 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:07.983994 1302865 cri.go:89] found id: ""
	I1213 14:58:07.984014 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.984022 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:07.984027 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:07.984090 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:08.016758 1302865 cri.go:89] found id: ""
	I1213 14:58:08.016772 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.016779 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:08.016784 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:08.016850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:08.048311 1302865 cri.go:89] found id: ""
	I1213 14:58:08.048326 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.048333 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:08.048338 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:08.048404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:08.074196 1302865 cri.go:89] found id: ""
	I1213 14:58:08.074211 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.074219 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:08.074226 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:08.074237 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:08.139046 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:08.139073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:08.167121 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:08.167141 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:08.222634 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:08.222664 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:08.240309 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:08.240325 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:08.310479 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:08.301605   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.302332   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304082   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304629   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.306246   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:08.301605   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.302332   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304082   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304629   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.306246   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:10.810723 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:10.820844 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:10.820953 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:10.865862 1302865 cri.go:89] found id: ""
	I1213 14:58:10.865875 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.865882 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:10.865888 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:10.865959 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:10.896607 1302865 cri.go:89] found id: ""
	I1213 14:58:10.896621 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.896628 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:10.896634 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:10.896710 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:10.924657 1302865 cri.go:89] found id: ""
	I1213 14:58:10.924671 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.924678 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:10.924684 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:10.924748 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:10.949300 1302865 cri.go:89] found id: ""
	I1213 14:58:10.949314 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.949321 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:10.949326 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:10.949388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:10.973896 1302865 cri.go:89] found id: ""
	I1213 14:58:10.973910 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.973917 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:10.973922 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:10.973983 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:10.998200 1302865 cri.go:89] found id: ""
	I1213 14:58:10.998214 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.998231 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:10.998237 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:10.998295 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:11.024841 1302865 cri.go:89] found id: ""
	I1213 14:58:11.024856 1302865 logs.go:282] 0 containers: []
	W1213 14:58:11.024863 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:11.024871 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:11.024886 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:11.092350 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:11.083613   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.084237   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086051   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086531   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.088132   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:11.083613   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.084237   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086051   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086531   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.088132   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:11.092361 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:11.092372 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:11.154591 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:11.154612 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:11.187883 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:11.187899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:11.248594 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:11.248613 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:13.766160 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:13.776057 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:13.776115 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:13.800863 1302865 cri.go:89] found id: ""
	I1213 14:58:13.800877 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.800884 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:13.800889 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:13.800990 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:13.825283 1302865 cri.go:89] found id: ""
	I1213 14:58:13.825298 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.825305 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:13.825309 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:13.825368 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:13.857732 1302865 cri.go:89] found id: ""
	I1213 14:58:13.857746 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.857753 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:13.857758 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:13.857816 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:13.891546 1302865 cri.go:89] found id: ""
	I1213 14:58:13.891560 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.891566 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:13.891572 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:13.891629 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:13.918725 1302865 cri.go:89] found id: ""
	I1213 14:58:13.918738 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.918746 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:13.918750 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:13.918810 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:13.942434 1302865 cri.go:89] found id: ""
	I1213 14:58:13.942448 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.942455 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:13.942460 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:13.942521 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:13.966591 1302865 cri.go:89] found id: ""
	I1213 14:58:13.966606 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.966613 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:13.966621 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:13.966632 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:13.983200 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:13.983217 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:14.050601 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:14.041722   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.042310   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044028   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044598   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.046179   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:14.041722   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.042310   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044028   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044598   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.046179   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:14.050610 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:14.050622 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:14.111742 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:14.111761 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:14.139171 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:14.139189 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:16.694504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:16.704690 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:16.704753 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:16.730421 1302865 cri.go:89] found id: ""
	I1213 14:58:16.730436 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.730444 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:16.730449 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:16.730510 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:16.755642 1302865 cri.go:89] found id: ""
	I1213 14:58:16.755657 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.755676 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:16.755681 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:16.755741 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:16.780583 1302865 cri.go:89] found id: ""
	I1213 14:58:16.780597 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.780604 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:16.780610 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:16.780685 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:16.809520 1302865 cri.go:89] found id: ""
	I1213 14:58:16.809534 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.809542 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:16.809547 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:16.809606 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:16.845772 1302865 cri.go:89] found id: ""
	I1213 14:58:16.845787 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.845794 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:16.845799 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:16.845867 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:16.871303 1302865 cri.go:89] found id: ""
	I1213 14:58:16.871338 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.871345 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:16.871350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:16.871411 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:16.897846 1302865 cri.go:89] found id: ""
	I1213 14:58:16.897859 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.897866 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:16.897875 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:16.897885 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:16.959059 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:16.959079 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:16.996406 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:16.996421 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:17.052568 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:17.052589 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:17.069678 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:17.069696 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:17.133677 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:17.125422   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.125964   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.127591   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.128083   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.129662   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:17.125422   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.125964   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.127591   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.128083   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.129662   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:19.633920 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:19.644044 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:19.644109 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:19.668667 1302865 cri.go:89] found id: ""
	I1213 14:58:19.668681 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.668688 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:19.668693 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:19.668759 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:19.693045 1302865 cri.go:89] found id: ""
	I1213 14:58:19.693059 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.693066 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:19.693071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:19.693134 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:19.717622 1302865 cri.go:89] found id: ""
	I1213 14:58:19.717637 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.717643 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:19.717649 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:19.717708 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:19.742933 1302865 cri.go:89] found id: ""
	I1213 14:58:19.742948 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.742954 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:19.742962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:19.743024 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:19.767055 1302865 cri.go:89] found id: ""
	I1213 14:58:19.767069 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.767076 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:19.767081 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:19.767139 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:19.793086 1302865 cri.go:89] found id: ""
	I1213 14:58:19.793100 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.793107 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:19.793112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:19.793172 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:19.816884 1302865 cri.go:89] found id: ""
	I1213 14:58:19.816898 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.816905 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:19.816912 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:19.816927 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:19.833746 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:19.833763 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:19.912181 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:19.904591   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.905016   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906518   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906824   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.908282   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:19.904591   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.905016   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906518   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906824   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.908282   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:19.912191 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:19.912202 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:19.973611 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:19.973631 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:20.005249 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:20.005269 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:22.571015 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:22.581487 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:22.581553 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:22.606385 1302865 cri.go:89] found id: ""
	I1213 14:58:22.606399 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.606405 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:22.606411 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:22.606466 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:22.631290 1302865 cri.go:89] found id: ""
	I1213 14:58:22.631304 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.631330 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:22.631341 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:22.631402 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:22.656039 1302865 cri.go:89] found id: ""
	I1213 14:58:22.656053 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.656059 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:22.656064 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:22.656123 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:22.680255 1302865 cri.go:89] found id: ""
	I1213 14:58:22.680268 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.680275 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:22.680281 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:22.680339 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:22.705412 1302865 cri.go:89] found id: ""
	I1213 14:58:22.705426 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.705434 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:22.705439 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:22.705501 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:22.729869 1302865 cri.go:89] found id: ""
	I1213 14:58:22.729885 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.729891 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:22.729897 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:22.729961 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:22.757980 1302865 cri.go:89] found id: ""
	I1213 14:58:22.757994 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.758001 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:22.758009 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:22.758022 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:22.774416 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:22.774433 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:22.850017 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:22.837138   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.837894   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.843564   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.844183   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.845796   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:22.837138   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.837894   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.843564   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.844183   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.845796   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:22.850034 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:22.850045 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:22.916305 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:22.916327 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:22.946422 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:22.946438 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:25.504766 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:25.515062 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:25.515129 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:25.539801 1302865 cri.go:89] found id: ""
	I1213 14:58:25.539815 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.539822 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:25.539827 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:25.539888 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:25.564134 1302865 cri.go:89] found id: ""
	I1213 14:58:25.564148 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.564155 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:25.564159 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:25.564218 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:25.588150 1302865 cri.go:89] found id: ""
	I1213 14:58:25.588165 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.588173 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:25.588178 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:25.588239 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:25.613567 1302865 cri.go:89] found id: ""
	I1213 14:58:25.613581 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.613588 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:25.613593 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:25.613659 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:25.643274 1302865 cri.go:89] found id: ""
	I1213 14:58:25.643290 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.643297 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:25.643303 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:25.643388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:25.668136 1302865 cri.go:89] found id: ""
	I1213 14:58:25.668150 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.668157 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:25.668162 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:25.668223 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:25.693114 1302865 cri.go:89] found id: ""
	I1213 14:58:25.693128 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.693135 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:25.693143 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:25.693152 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:25.751087 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:25.751106 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:25.768578 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:25.768598 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:25.842306 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:25.826336   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.827040   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.829047   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.830610   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.833626   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:25.826336   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.827040   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.829047   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.830610   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.833626   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:25.842315 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:25.842325 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:25.934744 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:25.934771 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:28.468857 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:28.479478 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:28.479543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:28.509273 1302865 cri.go:89] found id: ""
	I1213 14:58:28.509286 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.509293 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:28.509299 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:28.509360 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:28.535574 1302865 cri.go:89] found id: ""
	I1213 14:58:28.535588 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.535595 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:28.535601 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:28.535660 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:28.561231 1302865 cri.go:89] found id: ""
	I1213 14:58:28.561244 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.561251 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:28.561256 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:28.561316 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:28.586867 1302865 cri.go:89] found id: ""
	I1213 14:58:28.586881 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.586897 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:28.586903 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:28.586971 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:28.613781 1302865 cri.go:89] found id: ""
	I1213 14:58:28.613795 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.613802 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:28.613807 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:28.613865 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:28.639226 1302865 cri.go:89] found id: ""
	I1213 14:58:28.639247 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.639255 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:28.639260 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:28.639351 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:28.664957 1302865 cri.go:89] found id: ""
	I1213 14:58:28.664971 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.664977 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:28.664985 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:28.664995 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:28.681545 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:28.681562 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:28.746274 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:28.738221   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.738895   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740429   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740922   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.742410   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:28.738221   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.738895   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740429   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740922   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.742410   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:28.746286 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:28.746297 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:28.811866 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:28.811886 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:28.853916 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:28.853932 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:31.417796 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:31.427841 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:31.427906 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:31.454876 1302865 cri.go:89] found id: ""
	I1213 14:58:31.454890 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.454897 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:31.454903 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:31.454967 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:31.478745 1302865 cri.go:89] found id: ""
	I1213 14:58:31.478763 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.478770 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:31.478774 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:31.478834 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:31.504045 1302865 cri.go:89] found id: ""
	I1213 14:58:31.504059 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.504066 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:31.504071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:31.504132 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:31.536667 1302865 cri.go:89] found id: ""
	I1213 14:58:31.536687 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.536694 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:31.536699 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:31.536759 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:31.561651 1302865 cri.go:89] found id: ""
	I1213 14:58:31.561665 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.561672 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:31.561679 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:31.561740 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:31.590467 1302865 cri.go:89] found id: ""
	I1213 14:58:31.590487 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.590494 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:31.590499 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:31.590572 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:31.621443 1302865 cri.go:89] found id: ""
	I1213 14:58:31.621457 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.621467 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:31.621475 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:31.621485 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:31.689190 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:31.680703   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.681594   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.682366   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.683913   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.684339   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:31.680703   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.681594   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.682366   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.683913   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.684339   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:31.689199 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:31.689210 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:31.750918 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:31.750940 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:31.777989 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:31.778007 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:31.837415 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:31.837438 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:34.355220 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:34.365583 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:34.365646 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:34.390861 1302865 cri.go:89] found id: ""
	I1213 14:58:34.390875 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.390882 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:34.390887 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:34.390945 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:34.419452 1302865 cri.go:89] found id: ""
	I1213 14:58:34.419466 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.419473 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:34.419478 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:34.419540 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:34.444048 1302865 cri.go:89] found id: ""
	I1213 14:58:34.444062 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.444069 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:34.444073 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:34.444135 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:34.472603 1302865 cri.go:89] found id: ""
	I1213 14:58:34.472617 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.472623 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:34.472629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:34.472693 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:34.496330 1302865 cri.go:89] found id: ""
	I1213 14:58:34.496344 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.496351 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:34.496356 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:34.496415 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:34.521267 1302865 cri.go:89] found id: ""
	I1213 14:58:34.521281 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.521288 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:34.521294 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:34.521355 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:34.545219 1302865 cri.go:89] found id: ""
	I1213 14:58:34.545234 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.545241 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:34.545248 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:34.545263 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:34.611331 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:34.602304   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.603074   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.604885   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.605533   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.607098   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:34.602304   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.603074   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.604885   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.605533   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.607098   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:34.611342 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:34.611352 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:34.674005 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:34.674023 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:34.701768 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:34.701784 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:34.760313 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:34.760332 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:37.279813 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:37.289901 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:37.289961 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:37.314082 1302865 cri.go:89] found id: ""
	I1213 14:58:37.314097 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.314103 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:37.314115 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:37.314174 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:37.349456 1302865 cri.go:89] found id: ""
	I1213 14:58:37.349470 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.349477 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:37.349482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:37.349540 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:37.376791 1302865 cri.go:89] found id: ""
	I1213 14:58:37.376805 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.376812 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:37.376817 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:37.376877 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:37.400702 1302865 cri.go:89] found id: ""
	I1213 14:58:37.400717 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.400724 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:37.400730 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:37.400792 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:37.424348 1302865 cri.go:89] found id: ""
	I1213 14:58:37.424363 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.424370 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:37.424375 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:37.424435 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:37.449182 1302865 cri.go:89] found id: ""
	I1213 14:58:37.449197 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.449204 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:37.449209 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:37.449270 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:37.476252 1302865 cri.go:89] found id: ""
	I1213 14:58:37.476266 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.476273 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:37.476280 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:37.476294 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:37.534602 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:37.534621 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:37.552019 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:37.552037 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:37.614270 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:37.605713   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.606478   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608056   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608699   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.610244   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:37.605713   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.606478   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608056   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608699   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.610244   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:37.614281 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:37.614292 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:37.676894 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:37.676913 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:40.209558 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:40.220003 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:40.220065 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:40.246553 1302865 cri.go:89] found id: ""
	I1213 14:58:40.246567 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.246574 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:40.246579 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:40.246642 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:40.270663 1302865 cri.go:89] found id: ""
	I1213 14:58:40.270677 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.270684 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:40.270689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:40.270750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:40.296263 1302865 cri.go:89] found id: ""
	I1213 14:58:40.296278 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.296285 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:40.296292 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:40.296352 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:40.320181 1302865 cri.go:89] found id: ""
	I1213 14:58:40.320195 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.320204 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:40.320208 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:40.320268 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:40.345140 1302865 cri.go:89] found id: ""
	I1213 14:58:40.345155 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.345162 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:40.345167 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:40.345236 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:40.368989 1302865 cri.go:89] found id: ""
	I1213 14:58:40.369003 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.369010 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:40.369015 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:40.369075 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:40.393631 1302865 cri.go:89] found id: ""
	I1213 14:58:40.393646 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.393653 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:40.393661 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:40.393672 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:40.421318 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:40.421334 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:40.480359 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:40.480379 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:40.497525 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:40.497544 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:40.565603 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:40.557124   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.557721   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559295   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559804   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.561673   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:40.557124   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.557721   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559295   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559804   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.561673   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:40.565614 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:40.565625 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:43.127433 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:43.141684 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:43.141744 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:43.166921 1302865 cri.go:89] found id: ""
	I1213 14:58:43.166935 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.166942 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:43.166947 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:43.167010 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:43.191796 1302865 cri.go:89] found id: ""
	I1213 14:58:43.191810 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.191817 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:43.191823 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:43.191883 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:43.220968 1302865 cri.go:89] found id: ""
	I1213 14:58:43.220982 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.220988 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:43.220993 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:43.221050 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:43.249138 1302865 cri.go:89] found id: ""
	I1213 14:58:43.249153 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.249160 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:43.249166 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:43.249226 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:43.273972 1302865 cri.go:89] found id: ""
	I1213 14:58:43.273986 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.273993 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:43.273998 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:43.274056 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:43.298424 1302865 cri.go:89] found id: ""
	I1213 14:58:43.298439 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.298446 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:43.298451 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:43.298523 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:43.326886 1302865 cri.go:89] found id: ""
	I1213 14:58:43.326900 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.326907 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:43.326915 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:43.326925 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:43.383183 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:43.383202 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:43.401545 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:43.401564 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:43.472321 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:43.463674   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.464321   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466101   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466707   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.468411   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:43.463674   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.464321   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466101   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466707   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.468411   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:43.472331 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:43.472347 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:43.535483 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:43.535504 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:46.069443 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:46.079671 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:46.079735 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:46.112232 1302865 cri.go:89] found id: ""
	I1213 14:58:46.112246 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.112263 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:46.112268 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:46.112334 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:46.143946 1302865 cri.go:89] found id: ""
	I1213 14:58:46.143960 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.143968 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:46.143973 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:46.144034 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:46.172869 1302865 cri.go:89] found id: ""
	I1213 14:58:46.172893 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.172901 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:46.172906 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:46.172969 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:46.198118 1302865 cri.go:89] found id: ""
	I1213 14:58:46.198132 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.198139 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:46.198144 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:46.198210 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:46.226657 1302865 cri.go:89] found id: ""
	I1213 14:58:46.226672 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.226679 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:46.226689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:46.226750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:46.250158 1302865 cri.go:89] found id: ""
	I1213 14:58:46.250183 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.250190 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:46.250199 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:46.250268 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:46.275259 1302865 cri.go:89] found id: ""
	I1213 14:58:46.275274 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.275281 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:46.275303 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:46.275335 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:46.349416 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:46.340779   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.341521   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343041   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343652   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.345289   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:46.340779   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.341521   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343041   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343652   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.345289   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:46.349427 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:46.349440 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:46.412854 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:46.412874 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:46.443625 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:46.443641 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:46.501088 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:46.501108 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:49.018999 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:49.029334 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:49.029404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:49.054853 1302865 cri.go:89] found id: ""
	I1213 14:58:49.054867 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.054874 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:49.054879 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:49.054941 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:49.081166 1302865 cri.go:89] found id: ""
	I1213 14:58:49.081185 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.081193 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:49.081198 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:49.081261 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:49.109404 1302865 cri.go:89] found id: ""
	I1213 14:58:49.109418 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.109425 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:49.109430 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:49.109493 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:49.136643 1302865 cri.go:89] found id: ""
	I1213 14:58:49.136658 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.136665 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:49.136670 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:49.136741 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:49.165751 1302865 cri.go:89] found id: ""
	I1213 14:58:49.165765 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.165772 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:49.165777 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:49.165837 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:49.193225 1302865 cri.go:89] found id: ""
	I1213 14:58:49.193239 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.193246 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:49.193252 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:49.193314 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:49.221440 1302865 cri.go:89] found id: ""
	I1213 14:58:49.221455 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.221462 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:49.221470 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:49.221480 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:49.277216 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:49.277234 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:49.293907 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:49.293927 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:49.356075 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:49.348073   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.348458   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350225   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350571   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.352111   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:49.348073   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.348458   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350225   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350571   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.352111   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:49.356085 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:49.356095 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:49.418015 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:49.418034 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:51.951013 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:51.961457 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:51.961522 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:51.988624 1302865 cri.go:89] found id: ""
	I1213 14:58:51.988638 1302865 logs.go:282] 0 containers: []
	W1213 14:58:51.988645 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:51.988650 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:51.988725 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:52.015499 1302865 cri.go:89] found id: ""
	I1213 14:58:52.015513 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.015520 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:52.015526 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:52.015589 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:52.041762 1302865 cri.go:89] found id: ""
	I1213 14:58:52.041777 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.041784 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:52.041789 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:52.041850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:52.068323 1302865 cri.go:89] found id: ""
	I1213 14:58:52.068338 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.068345 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:52.068350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:52.068415 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:52.106065 1302865 cri.go:89] found id: ""
	I1213 14:58:52.106079 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.106086 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:52.106091 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:52.106160 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:52.140252 1302865 cri.go:89] found id: ""
	I1213 14:58:52.140272 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.140279 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:52.140284 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:52.140343 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:52.167100 1302865 cri.go:89] found id: ""
	I1213 14:58:52.167113 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.167120 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:52.167128 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:52.167138 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:52.226191 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:52.226210 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:52.243667 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:52.243683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:52.311033 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:52.302537   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.303096   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.304747   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.305085   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.306664   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:52.302537   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.303096   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.304747   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.305085   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.306664   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:52.311046 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:52.311057 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:52.372679 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:52.372703 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:54.903108 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:54.913373 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:54.913436 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:54.938658 1302865 cri.go:89] found id: ""
	I1213 14:58:54.938673 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.938680 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:54.938686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:54.938753 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:54.962838 1302865 cri.go:89] found id: ""
	I1213 14:58:54.962851 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.962866 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:54.962871 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:54.962942 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:54.988758 1302865 cri.go:89] found id: ""
	I1213 14:58:54.988773 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.988780 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:54.988785 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:54.988855 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:55.021177 1302865 cri.go:89] found id: ""
	I1213 14:58:55.021192 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.021200 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:55.021206 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:55.021272 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:55.049330 1302865 cri.go:89] found id: ""
	I1213 14:58:55.049344 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.049356 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:55.049361 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:55.049421 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:55.079835 1302865 cri.go:89] found id: ""
	I1213 14:58:55.079849 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.079856 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:55.079861 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:55.079920 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:55.107073 1302865 cri.go:89] found id: ""
	I1213 14:58:55.107087 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.107094 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:55.107102 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:55.107112 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:55.165853 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:55.165871 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:55.183109 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:55.183127 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:55.251642 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:55.242691   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.243211   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.244929   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.245470   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.247181   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:55.242691   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.243211   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.244929   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.245470   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.247181   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:55.251652 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:55.251664 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:55.317380 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:55.317399 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:57.847271 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:57.857537 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:57.857603 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:57.882391 1302865 cri.go:89] found id: ""
	I1213 14:58:57.882405 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.882412 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:57.882417 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:57.882490 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:57.905909 1302865 cri.go:89] found id: ""
	I1213 14:58:57.905923 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.905943 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:57.905948 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:57.906018 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:57.930237 1302865 cri.go:89] found id: ""
	I1213 14:58:57.930252 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.930259 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:57.930264 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:57.930337 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:57.958985 1302865 cri.go:89] found id: ""
	I1213 14:58:57.959014 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.959020 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:57.959031 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:57.959099 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:57.983693 1302865 cri.go:89] found id: ""
	I1213 14:58:57.983707 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.983714 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:57.983719 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:57.983779 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:58.012155 1302865 cri.go:89] found id: ""
	I1213 14:58:58.012170 1302865 logs.go:282] 0 containers: []
	W1213 14:58:58.012178 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:58.012183 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:58.012250 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:58.043700 1302865 cri.go:89] found id: ""
	I1213 14:58:58.043714 1302865 logs.go:282] 0 containers: []
	W1213 14:58:58.043722 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:58.043730 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:58.043742 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:58.105070 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:58.105098 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:58.123698 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:58.123717 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:58.194632 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:58.186276   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.187012   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.188759   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.189247   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.190768   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:58.186276   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.187012   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.188759   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.189247   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.190768   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:58.194642 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:58.194653 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:58.256210 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:58.256230 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:00.787680 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:00.798261 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:00.798326 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:00.826895 1302865 cri.go:89] found id: ""
	I1213 14:59:00.826908 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.826915 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:00.826921 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:00.826980 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:00.851410 1302865 cri.go:89] found id: ""
	I1213 14:59:00.851424 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.851431 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:00.851437 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:00.851510 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:00.876891 1302865 cri.go:89] found id: ""
	I1213 14:59:00.876906 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.876912 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:00.876917 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:00.876975 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:00.900564 1302865 cri.go:89] found id: ""
	I1213 14:59:00.900578 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.900585 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:00.900589 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:00.900647 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:00.925560 1302865 cri.go:89] found id: ""
	I1213 14:59:00.925574 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.925581 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:00.925586 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:00.925647 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:00.954298 1302865 cri.go:89] found id: ""
	I1213 14:59:00.954311 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.954319 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:00.954330 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:00.954388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:00.980684 1302865 cri.go:89] found id: ""
	I1213 14:59:00.980698 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.980704 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:00.980718 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:00.980731 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:01.048024 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:01.039594   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.040152   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.041748   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.042294   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.043863   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:01.039594   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.040152   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.041748   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.042294   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.043863   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:01.048033 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:01.048044 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:01.110723 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:01.110742 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:01.144966 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:01.144983 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:01.203272 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:01.203301 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:03.722770 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:03.733112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:03.733170 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:03.761042 1302865 cri.go:89] found id: ""
	I1213 14:59:03.761057 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.761064 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:03.761069 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:03.761130 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:03.789429 1302865 cri.go:89] found id: ""
	I1213 14:59:03.789443 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.789450 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:03.789455 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:03.789521 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:03.816916 1302865 cri.go:89] found id: ""
	I1213 14:59:03.816930 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.816937 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:03.816942 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:03.817001 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:03.844301 1302865 cri.go:89] found id: ""
	I1213 14:59:03.844317 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.844324 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:03.844329 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:03.844388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:03.873060 1302865 cri.go:89] found id: ""
	I1213 14:59:03.873075 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.873082 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:03.873087 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:03.873147 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:03.910513 1302865 cri.go:89] found id: ""
	I1213 14:59:03.910527 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.910534 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:03.910539 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:03.910601 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:03.938039 1302865 cri.go:89] found id: ""
	I1213 14:59:03.938053 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.938060 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:03.938067 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:03.938077 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:03.993458 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:03.993478 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:04.011140 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:04.011157 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:04.078339 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:04.069502   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.070272   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072094   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072604   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.074176   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:04.069502   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.070272   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072094   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072604   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.074176   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:04.078350 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:04.078361 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:04.142915 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:04.142934 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:06.673444 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:06.683643 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:06.683703 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:06.708707 1302865 cri.go:89] found id: ""
	I1213 14:59:06.708727 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.708734 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:06.708739 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:06.708799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:06.734465 1302865 cri.go:89] found id: ""
	I1213 14:59:06.734479 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.734486 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:06.734495 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:06.734584 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:06.759590 1302865 cri.go:89] found id: ""
	I1213 14:59:06.759603 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.759610 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:06.759615 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:06.759674 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:06.785693 1302865 cri.go:89] found id: ""
	I1213 14:59:06.785706 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.785713 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:06.785720 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:06.785777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:06.810125 1302865 cri.go:89] found id: ""
	I1213 14:59:06.810139 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.810146 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:06.810151 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:06.810215 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:06.835783 1302865 cri.go:89] found id: ""
	I1213 14:59:06.835797 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.835804 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:06.835809 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:06.835869 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:06.860909 1302865 cri.go:89] found id: ""
	I1213 14:59:06.860922 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.860929 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:06.860936 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:06.860946 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:06.916027 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:06.916047 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:06.933118 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:06.933135 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:06.997759 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:06.989536   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.990242   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.991821   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.992353   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.993901   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:06.989536   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.990242   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.991821   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.992353   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.993901   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:06.997769 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:06.997779 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:07.059939 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:07.059961 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
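
The entries above make up one polling pass: minikube pgreps for a kube-apiserver process, asks crictl for each expected control-plane container, and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status output before retrying a few seconds later. Below is a minimal Go sketch of such a poll loop; it reuses the exact shell commands shown in the log, but the helper names, the 20-attempt cap, and the 3-second sleep are illustrative assumptions, not minikube's implementation.

// Hypothetical sketch of the polling pass visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

var components = []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}

// apiserverRunning mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// listContainers mirrors: sudo crictl ps -a --quiet --name=<component>
func listContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil || len(strings.TrimSpace(string(out))) == 0 {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for attempt := 0; attempt < 20 && !apiserverRunning(); attempt++ {
		for _, c := range components {
			if ids := listContainers(c); len(ids) == 0 {
				// Corresponds to the repeated warnings above.
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
		time.Sleep(3 * time.Second) // the log shows roughly 3s between passes
	}
}

Each empty crictl result corresponds to one of the "No container was found matching ..." warnings recorded above.
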
	I1213 14:59:09.591076 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:09.601913 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:09.601975 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:09.626204 1302865 cri.go:89] found id: ""
	I1213 14:59:09.626218 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.626225 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:09.626230 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:09.626289 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:09.653443 1302865 cri.go:89] found id: ""
	I1213 14:59:09.653457 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.653463 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:09.653469 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:09.653531 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:09.678836 1302865 cri.go:89] found id: ""
	I1213 14:59:09.678851 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.678858 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:09.678865 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:09.678924 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:09.704492 1302865 cri.go:89] found id: ""
	I1213 14:59:09.704506 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.704514 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:09.704519 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:09.704581 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:09.733333 1302865 cri.go:89] found id: ""
	I1213 14:59:09.733355 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.733363 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:09.733368 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:09.733431 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:09.758847 1302865 cri.go:89] found id: ""
	I1213 14:59:09.758861 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.758869 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:09.758874 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:09.758946 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:09.785932 1302865 cri.go:89] found id: ""
	I1213 14:59:09.785946 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.785953 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:09.785962 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:09.785973 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:09.842054 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:09.842073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:09.859249 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:09.859273 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:09.924527 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:09.916673   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.917242   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.918722   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.919219   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.920662   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:09.916673   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.917242   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.918722   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.919219   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.920662   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:09.924536 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:09.924546 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:09.987531 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:09.987550 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:12.517373 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:12.529230 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:12.529292 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:12.558354 1302865 cri.go:89] found id: ""
	I1213 14:59:12.558368 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.558375 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:12.558380 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:12.558439 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:12.585312 1302865 cri.go:89] found id: ""
	I1213 14:59:12.585326 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.585333 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:12.585338 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:12.585396 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:12.613481 1302865 cri.go:89] found id: ""
	I1213 14:59:12.613494 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.613501 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:12.613506 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:12.613564 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:12.636592 1302865 cri.go:89] found id: ""
	I1213 14:59:12.636614 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.636621 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:12.636627 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:12.636694 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:12.660499 1302865 cri.go:89] found id: ""
	I1213 14:59:12.660513 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.660520 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:12.660524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:12.660591 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:12.684274 1302865 cri.go:89] found id: ""
	I1213 14:59:12.684297 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.684304 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:12.684309 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:12.684377 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:12.715959 1302865 cri.go:89] found id: ""
	I1213 14:59:12.715973 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.715980 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:12.715992 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:12.716003 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:12.779780 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:12.771561   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.772194   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.773845   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.774330   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.775929   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:12.771561   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.772194   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.773845   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.774330   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.775929   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:12.779790 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:12.779801 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:12.840858 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:12.840877 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:12.870238 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:12.870256 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:12.930596 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:12.930615 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:15.449328 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:15.460194 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:15.460255 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:15.484663 1302865 cri.go:89] found id: ""
	I1213 14:59:15.484677 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.484683 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:15.484689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:15.484799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:15.513604 1302865 cri.go:89] found id: ""
	I1213 14:59:15.513619 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.513626 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:15.513631 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:15.513692 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:15.543496 1302865 cri.go:89] found id: ""
	I1213 14:59:15.543510 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.543517 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:15.543524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:15.543596 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:15.576119 1302865 cri.go:89] found id: ""
	I1213 14:59:15.576133 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.576140 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:15.576145 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:15.576207 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:15.600649 1302865 cri.go:89] found id: ""
	I1213 14:59:15.600663 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.600670 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:15.600675 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:15.600743 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:15.624956 1302865 cri.go:89] found id: ""
	I1213 14:59:15.624970 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.624977 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:15.624984 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:15.625045 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:15.649687 1302865 cri.go:89] found id: ""
	I1213 14:59:15.649700 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.649707 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:15.649717 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:15.649728 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:15.711417 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:15.711439 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:15.739859 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:15.739876 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:15.796008 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:15.796027 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:15.813254 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:15.813271 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:15.889756 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:15.881341   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.881741   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883505   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883986   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.885525   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:15.881341   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.881741   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883505   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883986   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.885525   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
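
Every describe-nodes attempt above fails the same way: kubectl cannot reach https://localhost:8441 because nothing answers on the apiserver port, which is consistent with crictl reporting no kube-apiserver container. As a manual check outside the test run (the 2-second timeout and the probe itself are assumptions, not something the harness executes), a small port probe would look like this:

// Minimal sketch: probe the apiserver port that kubectl keeps failing to reach.
// A refused connection matches the "dial tcp [::1]:8441: connect: connection
// refused" lines above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
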
	I1213 14:59:18.390805 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:18.401397 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:18.401458 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:18.426479 1302865 cri.go:89] found id: ""
	I1213 14:59:18.426493 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.426501 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:18.426507 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:18.426569 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:18.451763 1302865 cri.go:89] found id: ""
	I1213 14:59:18.451777 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.451784 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:18.451788 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:18.451846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:18.475994 1302865 cri.go:89] found id: ""
	I1213 14:59:18.476008 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.476015 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:18.476020 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:18.476080 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:18.500350 1302865 cri.go:89] found id: ""
	I1213 14:59:18.500363 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.500371 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:18.500376 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:18.500436 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:18.524126 1302865 cri.go:89] found id: ""
	I1213 14:59:18.524178 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.524186 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:18.524191 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:18.524251 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:18.552637 1302865 cri.go:89] found id: ""
	I1213 14:59:18.552650 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.552657 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:18.552668 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:18.552735 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:18.576409 1302865 cri.go:89] found id: ""
	I1213 14:59:18.576423 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.576430 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:18.576437 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:18.576448 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:18.632727 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:18.632750 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:18.649857 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:18.649874 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:18.717909 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:18.709647   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.710444   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712120   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712687   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.714255   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:18.709647   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.710444   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712120   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712687   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.714255   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:18.717920 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:18.717930 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:18.779709 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:18.779731 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:21.307289 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:21.317675 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:21.317738 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:21.357856 1302865 cri.go:89] found id: ""
	I1213 14:59:21.357870 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.357886 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:21.357892 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:21.357952 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:21.383442 1302865 cri.go:89] found id: ""
	I1213 14:59:21.383456 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.383478 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:21.383483 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:21.383550 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:21.410523 1302865 cri.go:89] found id: ""
	I1213 14:59:21.410537 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.410544 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:21.410549 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:21.410606 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:21.437275 1302865 cri.go:89] found id: ""
	I1213 14:59:21.437289 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.437296 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:21.437303 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:21.437361 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:21.460786 1302865 cri.go:89] found id: ""
	I1213 14:59:21.460800 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.460807 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:21.460813 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:21.460871 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:21.484394 1302865 cri.go:89] found id: ""
	I1213 14:59:21.484409 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.484416 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:21.484422 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:21.484481 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:21.513384 1302865 cri.go:89] found id: ""
	I1213 14:59:21.513398 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.513405 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:21.513413 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:21.513423 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:21.568892 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:21.568912 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:21.586837 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:21.586854 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:21.662678 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:21.654029   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.654719   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.656490   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.657142   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.658705   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:21.654029   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.654719   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.656490   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.657142   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.658705   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:21.662688 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:21.662699 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:21.736289 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:21.736318 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:24.267273 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:24.277337 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:24.277401 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:24.300799 1302865 cri.go:89] found id: ""
	I1213 14:59:24.300813 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.300820 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:24.300825 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:24.300883 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:24.329119 1302865 cri.go:89] found id: ""
	I1213 14:59:24.329133 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.329140 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:24.329145 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:24.329207 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:24.359906 1302865 cri.go:89] found id: ""
	I1213 14:59:24.359920 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.359927 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:24.359934 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:24.359993 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:24.388174 1302865 cri.go:89] found id: ""
	I1213 14:59:24.388188 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.388195 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:24.388201 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:24.388265 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:24.416221 1302865 cri.go:89] found id: ""
	I1213 14:59:24.416235 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.416242 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:24.416247 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:24.416306 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:24.441358 1302865 cri.go:89] found id: ""
	I1213 14:59:24.441373 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.441380 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:24.441385 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:24.441444 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:24.465868 1302865 cri.go:89] found id: ""
	I1213 14:59:24.465882 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.465889 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:24.465897 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:24.465907 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:24.522170 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:24.522189 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:24.539720 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:24.539741 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:24.605986 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:24.597621   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.598252   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.599831   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.600201   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.601630   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:24.597621   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.598252   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.599831   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.600201   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.601630   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:24.605996 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:24.606007 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:24.667358 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:24.667377 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:27.195225 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:27.205377 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:27.205438 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:27.229665 1302865 cri.go:89] found id: ""
	I1213 14:59:27.229679 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.229686 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:27.229692 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:27.229755 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:27.253927 1302865 cri.go:89] found id: ""
	I1213 14:59:27.253943 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.253950 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:27.253961 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:27.254022 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:27.277865 1302865 cri.go:89] found id: ""
	I1213 14:59:27.277879 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.277886 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:27.277891 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:27.277949 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:27.305956 1302865 cri.go:89] found id: ""
	I1213 14:59:27.305969 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.305977 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:27.305982 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:27.306041 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:27.330227 1302865 cri.go:89] found id: ""
	I1213 14:59:27.330241 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.330248 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:27.330253 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:27.330312 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:27.367738 1302865 cri.go:89] found id: ""
	I1213 14:59:27.367752 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.367759 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:27.367764 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:27.367823 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:27.400224 1302865 cri.go:89] found id: ""
	I1213 14:59:27.400239 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.400254 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:27.400262 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:27.400272 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:27.428506 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:27.428525 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:27.484755 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:27.484775 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:27.501783 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:27.501800 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:27.568006 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:27.559400   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.559958   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.561857   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.562433   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.564051   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:27.559400   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.559958   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.561857   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.562433   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.564051   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:27.568017 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:27.568029 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
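
When no control-plane containers turn up, each pass falls back to the log-gathering commands recorded above. A compact sketch that runs those same commands in one shot, assuming crictl and journald are present on the node (the gather helper and its output formatting are illustrative), might look like:

// Hypothetical helper mirroring the "Gathering logs for ..." steps in the log:
// kubelet and containerd come from journald, dmesg is filtered to warnings and
// above, and container status falls back from crictl to docker, exactly as the
// commands above show.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
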
	I1213 14:59:30.130924 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:30.142124 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:30.142187 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:30.168272 1302865 cri.go:89] found id: ""
	I1213 14:59:30.168286 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.168301 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:30.168306 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:30.168379 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:30.198491 1302865 cri.go:89] found id: ""
	I1213 14:59:30.198507 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.198515 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:30.198520 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:30.198583 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:30.224307 1302865 cri.go:89] found id: ""
	I1213 14:59:30.224321 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.224329 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:30.224334 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:30.224398 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:30.252127 1302865 cri.go:89] found id: ""
	I1213 14:59:30.252142 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.252150 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:30.252155 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:30.252216 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:30.277686 1302865 cri.go:89] found id: ""
	I1213 14:59:30.277700 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.277707 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:30.277712 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:30.277773 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:30.302751 1302865 cri.go:89] found id: ""
	I1213 14:59:30.302766 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.302773 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:30.302779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:30.302864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:30.331699 1302865 cri.go:89] found id: ""
	I1213 14:59:30.331713 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.331720 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:30.331727 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:30.331741 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:30.384091 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:30.384107 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:30.448178 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:30.448197 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:30.465395 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:30.465414 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:30.525911 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:30.518056   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.518498   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.519701   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.520227   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.521907   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:30.518056   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.518498   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.519701   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.520227   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.521907   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:30.525921 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:30.525931 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:33.088366 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:33.098677 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:33.098747 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:33.123559 1302865 cri.go:89] found id: ""
	I1213 14:59:33.123574 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.123581 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:33.123587 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:33.123648 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:33.149199 1302865 cri.go:89] found id: ""
	I1213 14:59:33.149214 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.149221 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:33.149231 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:33.149294 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:33.174660 1302865 cri.go:89] found id: ""
	I1213 14:59:33.174674 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.174681 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:33.174686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:33.174747 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:33.199686 1302865 cri.go:89] found id: ""
	I1213 14:59:33.199701 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.199709 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:33.199714 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:33.199776 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:33.223975 1302865 cri.go:89] found id: ""
	I1213 14:59:33.223990 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.223997 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:33.224002 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:33.224062 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:33.248004 1302865 cri.go:89] found id: ""
	I1213 14:59:33.248019 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.248026 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:33.248032 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:33.248099 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:33.272806 1302865 cri.go:89] found id: ""
	I1213 14:59:33.272821 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.272829 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:33.272837 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:33.272847 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:33.300705 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:33.300722 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:33.363767 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:33.363786 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:33.382421 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:33.382440 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:33.450503 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:33.442115   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.442703   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444236   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444803   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.446325   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:33.442115   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.442703   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444236   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444803   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.446325   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:33.450514 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:33.450526 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:36.015724 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:36.026901 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:36.026965 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:36.053629 1302865 cri.go:89] found id: ""
	I1213 14:59:36.053645 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.053653 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:36.053658 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:36.053722 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:36.080154 1302865 cri.go:89] found id: ""
	I1213 14:59:36.080170 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.080177 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:36.080183 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:36.080247 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:36.105197 1302865 cri.go:89] found id: ""
	I1213 14:59:36.105212 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.105219 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:36.105224 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:36.105284 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:36.129426 1302865 cri.go:89] found id: ""
	I1213 14:59:36.129440 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.129453 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:36.129458 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:36.129516 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:36.157680 1302865 cri.go:89] found id: ""
	I1213 14:59:36.157695 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.157702 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:36.157707 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:36.157768 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:36.186306 1302865 cri.go:89] found id: ""
	I1213 14:59:36.186320 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.186327 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:36.186333 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:36.186404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:36.210490 1302865 cri.go:89] found id: ""
	I1213 14:59:36.210504 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.210511 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:36.210518 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:36.210528 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:36.265225 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:36.265244 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:36.282625 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:36.282641 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:36.356056 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:36.344168   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.345304   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.346780   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.347080   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.348284   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:36.344168   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.345304   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.346780   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.347080   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.348284   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:36.356066 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:36.356078 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:36.426572 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:36.426595 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:38.953386 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:38.964071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:38.964149 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:38.987398 1302865 cri.go:89] found id: ""
	I1213 14:59:38.987412 1302865 logs.go:282] 0 containers: []
	W1213 14:59:38.987420 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:38.987426 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:38.987501 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:39.014333 1302865 cri.go:89] found id: ""
	I1213 14:59:39.014348 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.014355 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:39.014360 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:39.014425 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:39.041685 1302865 cri.go:89] found id: ""
	I1213 14:59:39.041699 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.041706 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:39.041711 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:39.041773 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:39.065151 1302865 cri.go:89] found id: ""
	I1213 14:59:39.065165 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.065172 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:39.065177 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:39.065236 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:39.089601 1302865 cri.go:89] found id: ""
	I1213 14:59:39.089614 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.089621 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:39.089629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:39.089695 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:39.114392 1302865 cri.go:89] found id: ""
	I1213 14:59:39.114406 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.114413 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:39.114418 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:39.114479 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:39.139175 1302865 cri.go:89] found id: ""
	I1213 14:59:39.139188 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.139195 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:39.139204 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:39.139214 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:39.194900 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:39.194920 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:39.212516 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:39.212534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:39.278353 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:39.270327   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.270899   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272513   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272875   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.274310   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:39.270327   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.270899   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272513   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272875   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.274310   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:39.278363 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:39.278376 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:39.339218 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:39.339237 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:41.878578 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:41.888870 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:41.888930 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:41.916325 1302865 cri.go:89] found id: ""
	I1213 14:59:41.916339 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.916346 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:41.916352 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:41.916408 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:41.940631 1302865 cri.go:89] found id: ""
	I1213 14:59:41.940646 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.940653 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:41.940658 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:41.940721 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:41.964819 1302865 cri.go:89] found id: ""
	I1213 14:59:41.964835 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.964842 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:41.964847 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:41.964909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:41.992880 1302865 cri.go:89] found id: ""
	I1213 14:59:41.992895 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.992902 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:41.992907 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:41.992966 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:42.037181 1302865 cri.go:89] found id: ""
	I1213 14:59:42.037196 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.037203 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:42.037208 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:42.037272 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:42.066224 1302865 cri.go:89] found id: ""
	I1213 14:59:42.066240 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.066247 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:42.066253 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:42.066324 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:42.113241 1302865 cri.go:89] found id: ""
	I1213 14:59:42.113259 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.113267 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:42.113275 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:42.113288 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:42.174660 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:42.174686 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:42.197359 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:42.197391 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:42.287788 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:42.278004   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.278734   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.280708   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.281502   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.282312   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:42.278004   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.278734   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.280708   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.281502   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.282312   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:42.287799 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:42.287810 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:42.353033 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:42.353052 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:44.892059 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:44.902815 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:44.902875 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:44.927725 1302865 cri.go:89] found id: ""
	I1213 14:59:44.927740 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.927747 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:44.927752 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:44.927815 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:44.957287 1302865 cri.go:89] found id: ""
	I1213 14:59:44.957301 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.957308 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:44.957313 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:44.957371 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:44.982138 1302865 cri.go:89] found id: ""
	I1213 14:59:44.982153 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.982160 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:44.982166 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:44.982225 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:45.025671 1302865 cri.go:89] found id: ""
	I1213 14:59:45.025689 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.025697 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:45.025704 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:45.025777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:45.070096 1302865 cri.go:89] found id: ""
	I1213 14:59:45.070112 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.070121 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:45.070126 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:45.070203 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:45.113264 1302865 cri.go:89] found id: ""
	I1213 14:59:45.113281 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.113289 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:45.113302 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:45.113391 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:45.146027 1302865 cri.go:89] found id: ""
	I1213 14:59:45.146050 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.146058 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:45.146073 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:45.146084 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:45.242018 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:45.242086 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:45.278598 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:45.278619 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:45.377053 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:45.367099   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369078   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369934   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.371774   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.372065   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:45.367099   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369078   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369934   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.371774   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.372065   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:45.377063 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:45.377073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:45.449162 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:45.449183 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:47.980927 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:47.991934 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:47.991998 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:48.022075 1302865 cri.go:89] found id: ""
	I1213 14:59:48.022091 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.022098 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:48.022103 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:48.022169 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:48.052438 1302865 cri.go:89] found id: ""
	I1213 14:59:48.052454 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.052461 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:48.052466 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:48.052543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:48.077918 1302865 cri.go:89] found id: ""
	I1213 14:59:48.077932 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.077940 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:48.077945 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:48.078008 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:48.107677 1302865 cri.go:89] found id: ""
	I1213 14:59:48.107691 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.107698 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:48.107703 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:48.107803 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:48.134492 1302865 cri.go:89] found id: ""
	I1213 14:59:48.134506 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.134514 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:48.134523 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:48.134616 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:48.159260 1302865 cri.go:89] found id: ""
	I1213 14:59:48.159274 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.159281 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:48.159286 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:48.159368 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:48.184905 1302865 cri.go:89] found id: ""
	I1213 14:59:48.184920 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.184927 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:48.184935 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:48.184945 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:48.240512 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:48.240535 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:48.257663 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:48.257683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:48.323284 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:48.314914   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.315437   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317182   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317842   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.319544   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:48.314914   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.315437   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317182   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317842   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.319544   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:48.323295 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:48.323306 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:48.393384 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:48.393403 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:50.925922 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:50.936831 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:50.936895 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:50.963232 1302865 cri.go:89] found id: ""
	I1213 14:59:50.963246 1302865 logs.go:282] 0 containers: []
	W1213 14:59:50.963253 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:50.963258 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:50.963354 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:50.993552 1302865 cri.go:89] found id: ""
	I1213 14:59:50.993566 1302865 logs.go:282] 0 containers: []
	W1213 14:59:50.993572 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:50.993578 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:50.993639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:51.021945 1302865 cri.go:89] found id: ""
	I1213 14:59:51.021978 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.021986 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:51.021991 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:51.022051 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:51.049002 1302865 cri.go:89] found id: ""
	I1213 14:59:51.049017 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.049024 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:51.049029 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:51.049113 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:51.075979 1302865 cri.go:89] found id: ""
	I1213 14:59:51.075995 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.076003 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:51.076008 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:51.076071 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:51.101633 1302865 cri.go:89] found id: ""
	I1213 14:59:51.101648 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.101656 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:51.101661 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:51.101724 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:51.128983 1302865 cri.go:89] found id: ""
	I1213 14:59:51.128999 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.129007 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:51.129015 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:51.129025 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:51.185511 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:51.185538 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:51.203284 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:51.203306 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:51.265859 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:51.257020   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.257804   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.259431   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.260025   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.261672   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:51.257020   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.257804   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.259431   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.260025   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.261672   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:51.265869 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:51.265880 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:51.328096 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:51.328116 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:53.857136 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:53.867344 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:53.867405 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:53.890843 1302865 cri.go:89] found id: ""
	I1213 14:59:53.890857 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.890864 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:53.890869 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:53.890927 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:53.915236 1302865 cri.go:89] found id: ""
	I1213 14:59:53.915250 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.915258 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:53.915263 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:53.915341 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:53.939500 1302865 cri.go:89] found id: ""
	I1213 14:59:53.939515 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.939523 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:53.939528 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:53.939588 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:53.968671 1302865 cri.go:89] found id: ""
	I1213 14:59:53.968686 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.968693 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:53.968698 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:53.968766 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:53.992869 1302865 cri.go:89] found id: ""
	I1213 14:59:53.992883 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.992895 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:53.992900 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:53.992962 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:54.020494 1302865 cri.go:89] found id: ""
	I1213 14:59:54.020510 1302865 logs.go:282] 0 containers: []
	W1213 14:59:54.020518 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:54.020524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:54.020587 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:54.047224 1302865 cri.go:89] found id: ""
	I1213 14:59:54.047240 1302865 logs.go:282] 0 containers: []
	W1213 14:59:54.047247 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:54.047256 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:54.047268 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:54.064625 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:54.064643 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:54.131051 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:54.122613   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.123186   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.124913   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.125456   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.126931   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:54.122613   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.123186   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.124913   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.125456   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.126931   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:54.131061 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:54.131072 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:54.198481 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:54.198502 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:54.229657 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:54.229673 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
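The "describe nodes" collector fails in each of these cycles because nothing is answering on the apiserver port for this profile (8441); the other collectors (dmesg, journalctl for containerd and kubelet, crictl ps) still succeed. Assuming the same profile, the port state can be checked directly on the node with something like:

    minikube -p functional-562018 ssh -- sudo ss -ltnp | grep 8441
    minikube -p functional-562018 ssh -- curl -sk https://localhost:8441/healthz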
	I1213 14:59:56.788389 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:56.798893 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:56.798978 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:56.825463 1302865 cri.go:89] found id: ""
	I1213 14:59:56.825479 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.825486 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:56.825491 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:56.825569 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:56.850902 1302865 cri.go:89] found id: ""
	I1213 14:59:56.850916 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.850923 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:56.850928 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:56.850997 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:56.875729 1302865 cri.go:89] found id: ""
	I1213 14:59:56.875743 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.875750 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:56.875755 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:56.875812 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:56.904598 1302865 cri.go:89] found id: ""
	I1213 14:59:56.904612 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.904619 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:56.904624 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:56.904684 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:56.929612 1302865 cri.go:89] found id: ""
	I1213 14:59:56.929626 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.929633 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:56.929639 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:56.929696 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:56.954323 1302865 cri.go:89] found id: ""
	I1213 14:59:56.954337 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.954345 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:56.954350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:56.954411 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:56.978916 1302865 cri.go:89] found id: ""
	I1213 14:59:56.978930 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.978937 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:56.978944 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:56.978955 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:56.996271 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:56.996290 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:57.067201 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:57.058255   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.058923   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.060482   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.061159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.062082   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:57.058255   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.058923   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.060482   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.061159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.062082   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:57.067214 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:57.067227 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:57.129467 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:57.129486 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:57.160756 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:57.160773 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:59.726541 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:59.737128 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:59.737192 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:59.762034 1302865 cri.go:89] found id: ""
	I1213 14:59:59.762050 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.762057 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:59.762063 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:59.762136 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:59.786710 1302865 cri.go:89] found id: ""
	I1213 14:59:59.786724 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.786731 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:59.786738 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:59.786799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:59.823635 1302865 cri.go:89] found id: ""
	I1213 14:59:59.823649 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.823656 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:59.823661 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:59.823721 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:59.853555 1302865 cri.go:89] found id: ""
	I1213 14:59:59.853568 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.853576 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:59.853580 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:59.853639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:59.878766 1302865 cri.go:89] found id: ""
	I1213 14:59:59.878781 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.878788 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:59.878793 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:59.878853 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:59.904985 1302865 cri.go:89] found id: ""
	I1213 14:59:59.904999 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.905006 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:59.905012 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:59.905084 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:59.929868 1302865 cri.go:89] found id: ""
	I1213 14:59:59.929882 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.929889 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:59.929896 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:59.929906 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:59.991222 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:59.991242 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:00:00.071719 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 15:00:00.071740 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:00:00.209914 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 15:00:00.209948 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:00:00.266871 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:00:00.266916 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:00:00.606023 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:00:00.575459   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.581155   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.582626   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.584864   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.585965   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 15:00:00.575459   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.581155   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.582626   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.584864   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.585965   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:00:03.107691 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:00:03.118897 1302865 kubeadm.go:602] duration metric: took 4m4.796487812s to restartPrimaryControlPlane
	W1213 15:00:03.118966 1302865 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 15:00:03.119044 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 15:00:03.535783 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:00:03.550485 1302865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 15:00:03.558915 1302865 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:00:03.558988 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:00:03.567415 1302865 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:00:03.567426 1302865 kubeadm.go:158] found existing configuration files:
	
	I1213 15:00:03.567481 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 15:00:03.576037 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:00:03.576097 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:00:03.584074 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 15:00:03.592593 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:00:03.592651 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:00:03.601062 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 15:00:03.609623 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:00:03.609683 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:00:03.617551 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 15:00:03.625819 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:00:03.625879 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
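With the control-plane restart abandoned, minikube runs "kubeadm reset --force" and then does the stale-config sweep shown above: each /etc/kubernetes/*.conf is grepped for the expected control-plane endpoint and removed if the check fails. Because the reset already deleted those files, every grep exits with status 2 and the rm -f calls are no-ops. A rough shell equivalent of the sweep (illustrative only; paths and endpoint taken from the log):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done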
	I1213 15:00:03.634092 1302865 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:00:03.677773 1302865 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 15:00:03.677823 1302865 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:00:03.751455 1302865 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:00:03.751520 1302865 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:00:03.751555 1302865 kubeadm.go:319] OS: Linux
	I1213 15:00:03.751599 1302865 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:00:03.751646 1302865 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:00:03.751692 1302865 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:00:03.751738 1302865 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:00:03.751785 1302865 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:00:03.751832 1302865 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:00:03.751877 1302865 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:00:03.751923 1302865 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:00:03.751968 1302865 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:00:03.818698 1302865 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:00:03.818804 1302865 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:00:03.818894 1302865 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:00:03.825177 1302865 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:00:03.828382 1302865 out.go:252]   - Generating certificates and keys ...
	I1213 15:00:03.828484 1302865 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:00:03.828568 1302865 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:00:03.828657 1302865 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 15:00:03.828722 1302865 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 15:00:03.828813 1302865 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 15:00:03.828870 1302865 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 15:00:03.828941 1302865 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 15:00:03.829005 1302865 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 15:00:03.829084 1302865 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 15:00:03.829160 1302865 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 15:00:03.829199 1302865 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 15:00:03.829258 1302865 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:00:04.177571 1302865 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:00:04.342429 1302865 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:00:04.668058 1302865 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:00:04.760444 1302865 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:00:05.013305 1302865 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:00:05.014367 1302865 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:00:05.019071 1302865 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:00:05.022340 1302865 out.go:252]   - Booting up control plane ...
	I1213 15:00:05.022442 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:00:05.022520 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:00:05.022586 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:00:05.042894 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:00:05.043146 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:00:05.050754 1302865 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:00:05.051023 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:00:05.051065 1302865 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:00:05.191860 1302865 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:00:05.191979 1302865 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:04:05.190333 1302865 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000252344s
	I1213 15:04:05.190362 1302865 kubeadm.go:319] 
	I1213 15:04:05.190420 1302865 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:04:05.190453 1302865 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:04:05.190557 1302865 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:04:05.190562 1302865 kubeadm.go:319] 
	I1213 15:04:05.190665 1302865 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:04:05.190696 1302865 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:04:05.190726 1302865 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:04:05.190729 1302865 kubeadm.go:319] 
	I1213 15:04:05.195506 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:04:05.195924 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:04:05.196033 1302865 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:04:05.196267 1302865 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 15:04:05.196271 1302865 kubeadm.go:319] 
	I1213 15:04:05.196339 1302865 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
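kubeadm gives up once the 4m0s kubelet health-check window expires without http://127.0.0.1:10248/healthz ever answering. The commands it suggests, plus the health endpoint itself, can be run on the node to see why the kubelet never became healthy; for example (same profile assumed):

    minikube -p functional-562018 ssh -- systemctl status kubelet --no-pager
    minikube -p functional-562018 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
    minikube -p functional-562018 ssh -- curl -s http://127.0.0.1:10248/healthz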
	W1213 15:04:05.196471 1302865 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000252344s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
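The repeated SystemVerification warning notes that this node is still on the cgroup v1 hierarchy (kernel 5.15.0-1084-aws) and that kubelet v1.35 treats cgroup v1 as deprecated unless FailCgroupV1 is explicitly set to false; whether that is what actually kept the kubelet from starting is not visible in this excerpt. The cgroup mode on the node can be confirmed with:

    minikube -p functional-562018 ssh -- stat -fc %T /sys/fs/cgroup/
    # prints cgroup2fs on a cgroup v2 host, tmpfs on the legacy v1 hierarchy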
	
	I1213 15:04:05.196557 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 15:04:05.613572 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:04:05.627532 1302865 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:04:05.627586 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:04:05.635470 1302865 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:04:05.635487 1302865 kubeadm.go:158] found existing configuration files:
	
	I1213 15:04:05.635549 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 15:04:05.643770 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:04:05.643832 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:04:05.651305 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 15:04:05.659066 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:04:05.659119 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:04:05.666497 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 15:04:05.674867 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:04:05.674922 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:04:05.682604 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 15:04:05.690488 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:04:05.690547 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 15:04:05.697863 1302865 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:04:05.737903 1302865 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 15:04:05.738332 1302865 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:04:05.824821 1302865 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:04:05.824881 1302865 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:04:05.824914 1302865 kubeadm.go:319] OS: Linux
	I1213 15:04:05.824955 1302865 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:04:05.825000 1302865 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:04:05.825043 1302865 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:04:05.825103 1302865 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:04:05.825147 1302865 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:04:05.825200 1302865 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:04:05.825250 1302865 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:04:05.825294 1302865 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:04:05.825336 1302865 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:04:05.892296 1302865 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:04:05.892418 1302865 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:04:05.892526 1302865 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:04:05.898143 1302865 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:04:05.903540 1302865 out.go:252]   - Generating certificates and keys ...
	I1213 15:04:05.903629 1302865 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:04:05.903698 1302865 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:04:05.903775 1302865 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 15:04:05.903837 1302865 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 15:04:05.903908 1302865 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 15:04:05.903958 1302865 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 15:04:05.904021 1302865 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 15:04:05.904084 1302865 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 15:04:05.904160 1302865 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 15:04:05.904234 1302865 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 15:04:05.904275 1302865 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 15:04:05.904330 1302865 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:04:05.992570 1302865 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:04:06.166280 1302865 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:04:06.244452 1302865 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:04:06.386969 1302865 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:04:06.630629 1302865 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:04:06.631865 1302865 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:04:06.635872 1302865 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:04:06.639278 1302865 out.go:252]   - Booting up control plane ...
	I1213 15:04:06.639389 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:04:06.639462 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:04:06.639523 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:04:06.659049 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:04:06.659158 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:04:06.666661 1302865 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:04:06.666977 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:04:06.667151 1302865 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:04:06.810085 1302865 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:04:06.810198 1302865 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:08:06.809904 1302865 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000225024s
	I1213 15:08:06.809924 1302865 kubeadm.go:319] 
	I1213 15:08:06.810412 1302865 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:08:06.810499 1302865 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:08:06.810921 1302865 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:08:06.810931 1302865 kubeadm.go:319] 
	I1213 15:08:06.811146 1302865 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:08:06.811211 1302865 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:08:06.811291 1302865 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:08:06.811302 1302865 kubeadm.go:319] 
	I1213 15:08:06.814720 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:08:06.816724 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:08:06.816881 1302865 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:08:06.817212 1302865 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 15:08:06.817216 1302865 kubeadm.go:319] 
	I1213 15:08:06.817309 1302865 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
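The retry fails the same way, except the healthz call now gets "connection refused" instead of a timeout, i.e. the kubelet never bound port 10248 at all, after which minikube gives up on StartCluster and re-collects the diagnostics below. The same bundle can be produced on demand from this profile with, for example:

    minikube -p functional-562018 logs --file=functional-562018.log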
	I1213 15:08:06.817355 1302865 kubeadm.go:403] duration metric: took 12m8.532180676s to StartCluster
	I1213 15:08:06.817385 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:08:06.817448 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:08:06.841821 1302865 cri.go:89] found id: ""
	I1213 15:08:06.841835 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.841841 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 15:08:06.841847 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:08:06.841909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:08:06.865102 1302865 cri.go:89] found id: ""
	I1213 15:08:06.865122 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.865129 1302865 logs.go:284] No container was found matching "etcd"
	I1213 15:08:06.865134 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:08:06.865194 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:08:06.889354 1302865 cri.go:89] found id: ""
	I1213 15:08:06.889369 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.889376 1302865 logs.go:284] No container was found matching "coredns"
	I1213 15:08:06.889381 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:08:06.889444 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:08:06.916987 1302865 cri.go:89] found id: ""
	I1213 15:08:06.917001 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.917008 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 15:08:06.917014 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:08:06.917074 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:08:06.941966 1302865 cri.go:89] found id: ""
	I1213 15:08:06.941980 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.941987 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:08:06.941992 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:08:06.942053 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:08:06.967555 1302865 cri.go:89] found id: ""
	I1213 15:08:06.967570 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.967576 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 15:08:06.967582 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:08:06.967642 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:08:06.990643 1302865 cri.go:89] found id: ""
	I1213 15:08:06.990661 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.990669 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 15:08:06.990677 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 15:08:06.990688 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:08:07.046948 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 15:08:07.046967 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:08:07.064271 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:08:07.064292 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:08:07.156681 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:08:07.142614   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.149501   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.150219   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.151350   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.152858   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 15:08:07.142614   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.149501   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.150219   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.151350   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.152858   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:08:07.156693 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 15:08:07.156703 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:08:07.225180 1302865 logs.go:123] Gathering logs for container status ...
	I1213 15:08:07.225205 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 15:08:07.257292 1302865 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 15:08:07.257342 1302865 out.go:285] * 
	W1213 15:08:07.257449 1302865 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:08:07.257519 1302865 out.go:285] * 
	W1213 15:08:07.259853 1302865 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 15:08:07.265906 1302865 out.go:203] 
	W1213 15:08:07.268865 1302865 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:08:07.268911 1302865 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 15:08:07.268933 1302865 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 15:08:07.272012 1302865 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371055694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371071185Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371111471Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371124460Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371134322Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371145407Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371154235Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371164894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371186333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371215091Z" level=info msg="Connect containerd service"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371566107Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.372148338Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.392820866Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.392994105Z" level=info msg="Start subscribing containerd event"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.393210215Z" level=info msg="Start recovering state"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.393152477Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.438865616Z" level=info msg="Start event monitor"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439053460Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439140720Z" level=info msg="Start streaming server"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439202880Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439258526Z" level=info msg="runtime interface starting up..."
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439350397Z" level=info msg="starting plugins..."
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439418867Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 14:55:56 functional-562018 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.441778888Z" level=info msg="containerd successfully booted in 0.092313s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:08:08.484106   21069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:08.484738   21069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:08.486274   21069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:08.486731   21069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:08.488227   21069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 15:08:08 up  6:50,  0 user,  load average: 0.01, 0.13, 0.43
	Linux functional-562018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 15:08:04 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:08:05 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 13 15:08:05 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:05 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:05 functional-562018 kubelet[20873]: E1213 15:08:05.626033   20873 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:08:05 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:08:05 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:08:06 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 15:08:06 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:06 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:06 functional-562018 kubelet[20879]: E1213 15:08:06.386904   20879 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:08:06 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:08:06 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:08:07 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 15:08:07 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:07 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:07 functional-562018 kubelet[20953]: E1213 15:08:07.149200   20953 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:08:07 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:08:07 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:08:07 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 15:08:07 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:07 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:07 functional-562018 kubelet[20988]: E1213 15:08:07.907124   20988 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:08:07 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:08:07 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
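
The "==> kubelet <==" section above shows the systemd restart counter climbing from 318 to 321 within roughly three seconds, every attempt exiting with the same "failed to validate kubelet configuration ... cgroup v1" error; with no kubelet, none of the control-plane static pods can start, which is why "==> container status <==" is empty. A minimal sketch of the checks the kubeadm output itself recommends, run from the host against this profile (the commands are illustrative and were not part of this run):

	# Inspect the kubelet unit and its journal inside the minikube node.
	minikube -p functional-562018 ssh "sudo systemctl status kubelet --no-pager"
	minikube -p functional-562018 ssh "sudo journalctl -u kubelet --no-pager | tail -n 50"
	# Confirm which cgroup hierarchy the node sees: this prints cgroup2fs on a
	# cgroup v2 host and tmpfs on a v1 host such as this Ubuntu 20.04 runner.
	minikube -p functional-562018 ssh "stat -fc %T /sys/fs/cgroup/"
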
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 2 (344.087838ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.79s)
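
The root cause visible in the kubeadm output above is the cgroup v1 validation in kubelet v1.35.0-beta.0: the runner (Ubuntu 20.04, kernel 5.15.0-1084-aws) still uses cgroup v1, the kubelet exits with "kubelet is configured to not run on a host using cgroup v1", and kubeadm's 4m0s healthz wait times out. The SystemVerification warning names the KubeletConfiguration option 'FailCgroupV1'; a minimal sketch of the corresponding configuration fragment follows (the lowerCamelCase field name is inferred from that warning, and whether minikube's kubeadm patches expose this knob is not shown in this log):

	# Sketch only: write the field named by the warning into a kubelet config patch file.
	cat <<-'EOF' > kubelet-cgroupv1-patch.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF
	# The log's own Suggestion line proposes '--extra-config=kubelet.cgroup-driver=systemd';
	# that flag changes the cgroup driver rather than the v1-vs-v2 check, so it may not be
	# sufficient on its own for this particular validation failure.
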

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-562018 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-562018 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (65.126326ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-562018 get po -l tier=control-plane -n kube-system -o=json": exit status 1
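
This failure is a downstream symptom of the same kubelet problem: with the kubelet in a restart loop, the kube-apiserver static pod never starts, so every kubectl call against 192.168.49.2:8441 (the node IP and apiserver port recorded in the profile inspected below) is refused and the pod list comes back empty. A quick sketch of a reachability probe, using only values taken from this report:

	# Probe the apiserver endpoint for this profile; "connection refused" here means the
	# control-plane static pods never came up, so an empty kubectl pod list is expected.
	curl -k --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver not reachable"
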
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-562018
helpers_test.go:244: (dbg) docker inspect functional-562018:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	        "Created": "2025-12-13T14:41:15.451086653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1291703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T14:41:15.527927053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hosts",
	        "LogPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648-json.log",
	        "Name": "/functional-562018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-562018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-562018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	                "LowerDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-562018",
	                "Source": "/var/lib/docker/volumes/functional-562018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-562018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-562018",
	                "name.minikube.sigs.k8s.io": "functional-562018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b22297a29553cdd0dbc4eaa766abcdb1e67465ee18f1e2f5f9917dc8cf6d08",
	            "SandboxKey": "/var/run/docker/netns/f4b22297a295",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-562018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f3:95:ff:30:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd2e7aed753870546b4bccdeed8073ad5795c14334e906d06b964f72dc448c38",
	                    "EndpointID": "a0e947c1e40f773105c811b67b7d1d63f19d3a20060380bbde944bf9bfe39be5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-562018",
	                        "2cd1277ca783"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018: exit status 2 (320.188634ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-562018 logs -n 25: (1.016084601s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-831661 image ls --format short --alsologtostderr                                                                                             │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image ls --format table --alsologtostderr                                                                                             │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ ssh     │ functional-831661 ssh pgrep buildkitd                                                                                                                   │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	│ image   │ functional-831661 image ls --format yaml --alsologtostderr                                                                                              │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image build -t localhost/my-image:functional-831661 testdata/build --alsologtostderr                                                  │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ image   │ functional-831661 image ls                                                                                                                              │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ delete  │ -p functional-831661                                                                                                                                    │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
	│ start   │ -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │                     │
	│ start   │ -p functional-562018 --alsologtostderr -v=8                                                                                                             │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:49 UTC │                     │
	│ cache   │ functional-562018 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache add registry.k8s.io/pause:latest                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache add minikube-local-cache-test:functional-562018                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ functional-562018 cache delete minikube-local-cache-test:functional-562018                                                                              │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl images                                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │                     │
	│ cache   │ functional-562018 cache reload                                                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ kubectl │ functional-562018 kubectl -- --context functional-562018 get pods                                                                                       │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │                     │
	│ start   │ -p functional-562018 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:55:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:55:53.719613 1302865 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:55:53.719728 1302865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:55:53.719732 1302865 out.go:374] Setting ErrFile to fd 2...
	I1213 14:55:53.719735 1302865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:55:53.719985 1302865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:55:53.720335 1302865 out.go:368] Setting JSON to false
	I1213 14:55:53.721190 1302865 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23903,"bootTime":1765613851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:55:53.721260 1302865 start.go:143] virtualization:  
	I1213 14:55:53.724694 1302865 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:55:53.728380 1302865 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:55:53.728496 1302865 notify.go:221] Checking for updates...
	I1213 14:55:53.734124 1302865 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:55:53.736928 1302865 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:55:53.739728 1302865 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:55:53.742545 1302865 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 14:55:53.745302 1302865 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:55:53.748618 1302865 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:55:53.748719 1302865 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:55:53.782535 1302865 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:55:53.782649 1302865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:55:53.845662 1302865 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 14:55:53.829246857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:55:53.845758 1302865 docker.go:319] overlay module found
	I1213 14:55:53.849849 1302865 out.go:179] * Using the docker driver based on existing profile
	I1213 14:55:53.852762 1302865 start.go:309] selected driver: docker
	I1213 14:55:53.852774 1302865 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:53.852875 1302865 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:55:53.852984 1302865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:55:53.929886 1302865 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 14:55:53.921020705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:55:53.930294 1302865 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 14:55:53.930319 1302865 cni.go:84] Creating CNI manager for ""
	I1213 14:55:53.930367 1302865 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:55:53.930406 1302865 start.go:353] cluster config:
	{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:53.933662 1302865 out.go:179] * Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	I1213 14:55:53.936743 1302865 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 14:55:53.939760 1302865 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 14:55:53.942676 1302865 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:55:53.942716 1302865 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 14:55:53.942732 1302865 cache.go:65] Caching tarball of preloaded images
	I1213 14:55:53.942759 1302865 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 14:55:53.942845 1302865 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 14:55:53.942855 1302865 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 14:55:53.942970 1302865 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
	I1213 14:55:53.962568 1302865 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 14:55:53.962579 1302865 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 14:55:53.962597 1302865 cache.go:243] Successfully downloaded all kic artifacts
	I1213 14:55:53.962628 1302865 start.go:360] acquireMachinesLock for functional-562018: {Name:mk6a7956e4fce5d8e0f4d6fe039ab67ad6cd688b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:55:53.962689 1302865 start.go:364] duration metric: took 45.029µs to acquireMachinesLock for "functional-562018"
	I1213 14:55:53.962707 1302865 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:55:53.962711 1302865 fix.go:54] fixHost starting: 
	I1213 14:55:53.962972 1302865 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:55:53.980087 1302865 fix.go:112] recreateIfNeeded on functional-562018: state=Running err=<nil>
	W1213 14:55:53.980106 1302865 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:55:53.983261 1302865 out.go:252] * Updating the running docker "functional-562018" container ...
	I1213 14:55:53.983285 1302865 machine.go:94] provisionDockerMachine start ...
	I1213 14:55:53.983388 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.000833 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.001170 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.001177 1302865 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:55:54.155013 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:55:54.155027 1302865 ubuntu.go:182] provisioning hostname "functional-562018"
	I1213 14:55:54.155091 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.172804 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.173100 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.173108 1302865 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-562018 && echo "functional-562018" | sudo tee /etc/hostname
	I1213 14:55:54.335232 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:55:54.335302 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.353315 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.353625 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.353638 1302865 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-562018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-562018/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-562018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:55:54.503602 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: 
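The three SSH commands above (hostname, /etc/hostname, the /etc/hosts snippet) are how minikube pins the machine name on the node. A minimal standalone sketch of the same sequence, assuming passwordless sudo on the target and reusing the profile name from this run:

#!/usr/bin/env bash
# Idempotent hostname provisioning, mirroring the SSH commands in the log above.
set -euo pipefail
NAME="functional-562018"

sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname >/dev/null

# Point 127.0.1.1 at the new name unless /etc/hosts already mentions it.
if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
  else
    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts >/dev/null
  fi
fi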
	I1213 14:55:54.503618 1302865 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 14:55:54.503648 1302865 ubuntu.go:190] setting up certificates
	I1213 14:55:54.503664 1302865 provision.go:84] configureAuth start
	I1213 14:55:54.503732 1302865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:55:54.520737 1302865 provision.go:143] copyHostCerts
	I1213 14:55:54.520806 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 14:55:54.520813 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:55:54.520892 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 14:55:54.520992 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 14:55:54.520996 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:55:54.521022 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 14:55:54.521079 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 14:55:54.521082 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:55:54.521105 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 14:55:54.521157 1302865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.functional-562018 san=[127.0.0.1 192.168.49.2 functional-562018 localhost minikube]
	I1213 14:55:54.737947 1302865 provision.go:177] copyRemoteCerts
	I1213 14:55:54.738007 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:55:54.738047 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.756271 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:54.864730 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 14:55:54.885080 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 14:55:54.903456 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 14:55:54.921228 1302865 provision.go:87] duration metric: took 417.552003ms to configureAuth
	I1213 14:55:54.921245 1302865 ubuntu.go:206] setting minikube options for container-runtime
	I1213 14:55:54.921445 1302865 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:55:54.921451 1302865 machine.go:97] duration metric: took 938.161957ms to provisionDockerMachine
	I1213 14:55:54.921458 1302865 start.go:293] postStartSetup for "functional-562018" (driver="docker")
	I1213 14:55:54.921469 1302865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:55:54.921526 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:55:54.921569 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.939146 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.043619 1302865 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:55:55.047116 1302865 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 14:55:55.047136 1302865 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 14:55:55.047147 1302865 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 14:55:55.047201 1302865 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 14:55:55.047279 1302865 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 14:55:55.047377 1302865 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> hosts in /etc/test/nested/copy/1252934
	I1213 14:55:55.047422 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1252934
	I1213 14:55:55.055022 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:55:55.072651 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts --> /etc/test/nested/copy/1252934/hosts (40 bytes)
	I1213 14:55:55.090146 1302865 start.go:296] duration metric: took 168.672467ms for postStartSetup
	I1213 14:55:55.090222 1302865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:55:55.090277 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.110519 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.212743 1302865 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 14:55:55.217665 1302865 fix.go:56] duration metric: took 1.254946074s for fixHost
	I1213 14:55:55.217694 1302865 start.go:83] releasing machines lock for "functional-562018", held for 1.254985507s
	I1213 14:55:55.217771 1302865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:55:55.234536 1302865 ssh_runner.go:195] Run: cat /version.json
	I1213 14:55:55.234580 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.234841 1302865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:55:55.234904 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.258034 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.263005 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.363489 1302865 ssh_runner.go:195] Run: systemctl --version
	I1213 14:55:55.466608 1302865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 14:55:55.470983 1302865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:55:55.471044 1302865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:55:55.478685 1302865 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 14:55:55.478700 1302865 start.go:496] detecting cgroup driver to use...
	I1213 14:55:55.478730 1302865 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 14:55:55.478776 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 14:55:55.494349 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 14:55:55.507276 1302865 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:55:55.507360 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:55:55.523374 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:55:55.537388 1302865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:55:55.656533 1302865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:55:55.769801 1302865 docker.go:234] disabling docker service ...
	I1213 14:55:55.769857 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:55:55.784548 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:55:55.797129 1302865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:55:55.915684 1302865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:55:56.027646 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:55:56.050399 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:55:56.066005 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 14:55:56.076093 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 14:55:56.085556 1302865 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 14:55:56.085627 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 14:55:56.094545 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:55:56.104197 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 14:55:56.114269 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:55:56.123172 1302865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:55:56.132178 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 14:55:56.141074 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 14:55:56.150470 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 14:55:56.160063 1302865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:55:56.167903 1302865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:55:56.175659 1302865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:55:56.295844 1302865 ssh_runner.go:195] Run: sudo systemctl restart containerd
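The sed series above is how minikube rewrites /etc/containerd/config.toml in place before restarting the runtime. Condensed into one script (same expressions as the log, assuming the kicbase default config layout):

#!/usr/bin/env bash
# Apply the same config.toml edits as above in one pass, then restart containerd.
set -euo pipefail
CFG=/etc/containerd/config.toml

sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$CFG"
sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"        # keep the cgroupfs driver
sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"   # force the runc v2 shim
sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sudo systemctl daemon-reload
sudo systemctl restart containerd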
	I1213 14:55:56.441580 1302865 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 14:55:56.441654 1302865 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 14:55:56.445551 1302865 start.go:564] Will wait 60s for crictl version
	I1213 14:55:56.445607 1302865 ssh_runner.go:195] Run: which crictl
	I1213 14:55:56.449128 1302865 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 14:55:56.473587 1302865 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 14:55:56.473654 1302865 ssh_runner.go:195] Run: containerd --version
	I1213 14:55:56.493885 1302865 ssh_runner.go:195] Run: containerd --version
	I1213 14:55:56.518032 1302865 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 14:55:56.521077 1302865 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 14:55:56.537369 1302865 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 14:55:56.544433 1302865 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 14:55:56.547248 1302865 kubeadm.go:884] updating cluster {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:55:56.547410 1302865 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:55:56.547500 1302865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:55:56.572443 1302865 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:55:56.572458 1302865 containerd.go:534] Images already preloaded, skipping extraction
	I1213 14:55:56.572525 1302865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:55:56.603700 1302865 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:55:56.603712 1302865 cache_images.go:86] Images are preloaded, skipping loading
	I1213 14:55:56.603718 1302865 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 14:55:56.603824 1302865 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-562018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
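The unit text above is not written out as a single file; per the scp lines that follow, minikube splits it between /lib/systemd/system/kubelet.service and the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in. A hypothetical single-drop-in rendering of the same flag override, with paths and flags copied from this log:

# Hypothetical manual equivalent of the kubelet configuration shown above.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-562018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet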
	I1213 14:55:56.603888 1302865 ssh_runner.go:195] Run: sudo crictl info
	I1213 14:55:56.640969 1302865 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 14:55:56.640988 1302865 cni.go:84] Creating CNI manager for ""
	I1213 14:55:56.640997 1302865 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:55:56.641011 1302865 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:55:56.641033 1302865 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-562018 NodeName:functional-562018 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubel
etConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:55:56.641163 1302865 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-562018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 14:55:56.641238 1302865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 14:55:56.649442 1302865 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:55:56.649507 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:55:56.657006 1302865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 14:55:56.669728 1302865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 14:55:56.682334 1302865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
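At this point the rendered config exists only as /var/tmp/minikube/kubeadm.yaml.new. A possible sanity check before it is promoted, which this run does not perform and which assumes a kubeadm release that ships the `config validate` subcommand:

# Validate the freshly rendered kubeadm config against the bundled kubeadm binary.
sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new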
	I1213 14:55:56.694926 1302865 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 14:55:56.698838 1302865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:55:56.837238 1302865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:55:57.584722 1302865 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018 for IP: 192.168.49.2
	I1213 14:55:57.584733 1302865 certs.go:195] generating shared ca certs ...
	I1213 14:55:57.584753 1302865 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:55:57.584897 1302865 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 14:55:57.584947 1302865 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 14:55:57.584954 1302865 certs.go:257] generating profile certs ...
	I1213 14:55:57.585039 1302865 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key
	I1213 14:55:57.585090 1302865 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee
	I1213 14:55:57.585124 1302865 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key
	I1213 14:55:57.585235 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 14:55:57.585272 1302865 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 14:55:57.585280 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:55:57.585307 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 14:55:57.585330 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:55:57.585354 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 14:55:57.585399 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:55:57.591362 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:55:57.616349 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:55:57.635438 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:55:57.655371 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 14:55:57.672503 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 14:55:57.689594 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 14:55:57.706530 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:55:57.723556 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:55:57.740287 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 14:55:57.757304 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 14:55:57.774649 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:55:57.792687 1302865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:55:57.805822 1302865 ssh_runner.go:195] Run: openssl version
	I1213 14:55:57.812225 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.819503 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 14:55:57.826726 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.830446 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.830502 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.871253 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 14:55:57.878814 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.886029 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:55:57.893560 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.897283 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.897343 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.938225 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:55:57.946132 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.953318 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 14:55:57.960779 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.964616 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.964674 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 14:55:58.013928 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
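The test/ln/openssl sequence above follows the standard OpenSSL trust layout: each CA PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs plus a <subject-hash>.0 link so TLS clients on the node can find it. A sketch of doing the same for one PEM by hand (minikube only creates the hash link when the `test -L` probe fails, which it did not here):

#!/usr/bin/env bash
# Install one CA PEM into the node's OpenSSL trust store the way the log does above.
set -euo pipefail
pem=/usr/share/ca-certificates/minikubeCA.pem     # any of the PEMs copied over earlier

sudo ln -fs "$pem" "/etc/ssl/certs/$(basename "$pem")"
hash=$(openssl x509 -hash -noout -in "$pem")      # e.g. b5213941 for minikubeCA in this run
sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"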
	I1213 14:55:58.021993 1302865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:55:58.026144 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:55:58.067380 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:55:58.114887 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:55:58.156572 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:55:58.199117 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:55:58.241809 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
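`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 24 hours, which is how the six checks above decide whether any control-plane cert needs regenerating. The same checks written as a loop:

#!/usr/bin/env bash
# Report any control-plane cert that expires within 24h (mirrors the -checkend checks above).
set -euo pipefail
for crt in apiserver-etcd-client.crt apiserver-kubelet-client.crt front-proxy-client.crt \
           etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt; do
  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$crt" \
    || echo "expiring within 24h: $crt"
done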
	I1213 14:55:58.285184 1302865 kubeadm.go:401] StartCluster: {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:58.285266 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 14:55:58.285327 1302865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:55:58.314259 1302865 cri.go:89] found id: ""
	I1213 14:55:58.314322 1302865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:55:58.322386 1302865 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 14:55:58.322396 1302865 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 14:55:58.322453 1302865 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 14:55:58.329880 1302865 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.330377 1302865 kubeconfig.go:125] found "functional-562018" server: "https://192.168.49.2:8441"
	I1213 14:55:58.331729 1302865 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 14:55:58.341644 1302865 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 14:41:23.876598830 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 14:55:56.689854034 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
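The diff above is the whole drift check: the freshly rendered kubeadm.yaml.new is compared with the file used at the previous start, and any difference (here, the swapped enable-admission-plugins value) forces the restart path to reconfigure the cluster rather than just restart it. In shell terms:

# Non-zero diff exit status == config drift; minikube then reruns the kubeadm init phases.
if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
  echo "kubeadm config drift detected; reconfiguring the control plane"
fi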
	I1213 14:55:58.341663 1302865 kubeadm.go:1161] stopping kube-system containers ...
	I1213 14:55:58.341678 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 14:55:58.341741 1302865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:55:58.374972 1302865 cri.go:89] found id: ""
	I1213 14:55:58.375050 1302865 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 14:55:58.396016 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:55:58.404525 1302865 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 14:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 13 14:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 13 14:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 14:45 /etc/kubernetes/scheduler.conf
	
	I1213 14:55:58.404584 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 14:55:58.412946 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 14:55:58.420580 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.420635 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 14:55:58.428221 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 14:55:58.435971 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.436028 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:55:58.443530 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 14:55:58.451393 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.451448 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
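The grep/rm pairs above implement a simple rule: any kubeconfig that does not already point at the expected control-plane endpoint is deleted so the `kubeadm init phase kubeconfig` step below regenerates it (admin.conf passed the check in this run and was kept). The same rule as a loop:

#!/usr/bin/env bash
# Delete any kubeconfig that no longer targets the expected control-plane endpoint.
set -euo pipefail
endpoint="https://control-plane.minikube.internal:8441"
for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "$endpoint" "/etc/kubernetes/$conf" || sudo rm -f "/etc/kubernetes/$conf"
done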
	I1213 14:55:58.458854 1302865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 14:55:58.466605 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:58.520413 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:59.744405 1302865 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.223964216s)
	I1213 14:55:59.744467 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:59.946438 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:56:00.013725 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
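The five Run lines above are the restart path in miniature: certs and kubeconfigs are regenerated from the new kubeadm.yaml, the kubelet is brought up via the kubelet-start phase, and the control-plane and etcd static-pod manifests are re-rendered. As one script, using the same binary path and config file as the log:

#!/usr/bin/env bash
# Re-run the kubeadm init phases that make up the cluster "restart" above.
set -euo pipefail
BIN="/var/lib/minikube/binaries/v1.35.0-beta.0"
CFG="/var/tmp/minikube/kubeadm.yaml"
for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
  sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"   # $phase word-splits on purpose
done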
	I1213 14:56:00.113319 1302865 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:56:00.114955 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:00.613579 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:01.114177 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:01.613592 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:02.113571 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:02.613593 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:03.113840 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:03.613569 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:04.114249 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:04.613852 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:05.113537 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:05.613696 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:06.113540 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:06.614342 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:07.113785 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:07.613457 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:08.114283 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:08.613596 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:09.113597 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:09.614352 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:10.114532 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:10.613598 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:11.114365 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:11.614158 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:12.113539 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:12.613531 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:13.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:13.613463 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:14.114527 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:14.614435 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:15.113510 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:15.614373 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:16.114388 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:16.613507 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:17.113567 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:17.614369 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:18.113844 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:18.613714 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:19.114404 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:19.614169 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:20.114541 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:20.613650 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:21.113498 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:21.613589 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:22.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:22.613522 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:23.114240 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:23.614475 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:24.113893 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:24.614467 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:25.114531 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:25.613526 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:26.114346 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:26.614504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:27.113518 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:27.614286 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:28.114181 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:28.613958 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:29.113601 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:29.614343 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:30.114309 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:30.614109 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:31.114271 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:31.613510 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:32.114261 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:32.614199 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:33.114060 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:33.614237 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:34.114371 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:34.614467 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:35.114182 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:35.613614 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:36.113542 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:36.614402 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:37.114233 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:37.613522 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:38.113599 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:38.613584 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:39.114045 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:39.613569 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:40.113521 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:40.613504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:41.113503 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:41.614239 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:42.113697 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:42.614293 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:43.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:43.614231 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:44.114413 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:44.614537 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:45.114187 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:45.613592 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:46.113667 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:46.613755 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:47.113597 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:47.614262 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:48.113463 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:48.613700 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:49.113578 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:49.614192 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:50.113501 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:50.613492 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:51.114160 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:51.613924 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:52.114491 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:52.613532 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:53.113608 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:53.613620 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:54.114432 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:54.614359 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:55.114461 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:55.614143 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:56.113587 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:56.614451 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:57.113619 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:57.613622 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:58.113547 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:58.614429 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:59.113617 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:59.613534 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
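
The block above is minikube's wait for a kube-apiserver process: the same pgrep check runs roughly every 500 ms and never finds a match, and from 14:57:00 onward the wait falls back to the slower cycle below that also lists CRI containers and collects logs. A minimal bash sketch of the fast polling phase follows; the pgrep invocation and the ~500 ms interval are taken from the log, while the loop, the DEADLINE value, and the assumption that it runs directly on the node are illustrative only, not minikube's actual implementation.

    #!/usr/bin/env bash
    # Hypothetical sketch of the polling phase shown above, assumed to run on
    # the minikube node itself. Only the pgrep invocation and the ~500 ms
    # interval come from the log; the loop and DEADLINE are illustrative.
    DEADLINE=$((SECONDS + 240))          # assumed overall timeout
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= DEADLINE )); then
        echo "kube-apiserver process never appeared before the deadline" >&2
        exit 1
      fi
      sleep 0.5                          # matches the spacing of the entries above
    done
    echo "kube-apiserver process is running"
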
	I1213 14:57:00.124126 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:00.124233 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:00.200982 1302865 cri.go:89] found id: ""
	I1213 14:57:00.201003 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.201011 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:00.201018 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:00.201100 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:00.237755 1302865 cri.go:89] found id: ""
	I1213 14:57:00.237770 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.237778 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:00.237783 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:00.237861 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:00.301679 1302865 cri.go:89] found id: ""
	I1213 14:57:00.301694 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.301702 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:00.301709 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:00.301778 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:00.347228 1302865 cri.go:89] found id: ""
	I1213 14:57:00.347243 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.347251 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:00.347256 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:00.347356 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:00.376454 1302865 cri.go:89] found id: ""
	I1213 14:57:00.376471 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.376479 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:00.376485 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:00.376555 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:00.408967 1302865 cri.go:89] found id: ""
	I1213 14:57:00.408982 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.408989 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:00.408995 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:00.409059 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:00.437494 1302865 cri.go:89] found id: ""
	I1213 14:57:00.437509 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.437516 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:00.437524 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:00.437534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:00.493840 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:00.493860 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:00.511767 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:00.511785 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:00.579231 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:00.570332   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.571045   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.572740   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.573345   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.575117   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:00.570332   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.571045   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.572740   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.573345   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.575117   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:00.579242 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:00.579253 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:00.641446 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:00.641467 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
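
Each of the cycles that follow repeats this same diagnostic sweep: one crictl query per control-plane component, then kubelet, dmesg, describe-nodes, containerd, and container-status collection, all of which keep failing because nothing is listening on localhost:8441. Every command in the sketch below appears verbatim in the cycle above; only the loop over component names and the ordering wrapper are an illustration.

    #!/usr/bin/env bash
    # Hypothetical sketch of one diagnostic sweep from the log above. All
    # commands are copied from the log; the component loop is illustrative.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container found matching \"$name\""
    done
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
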
	I1213 14:57:03.171486 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:03.181873 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:03.181935 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:03.212211 1302865 cri.go:89] found id: ""
	I1213 14:57:03.212226 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.212232 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:03.212244 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:03.212304 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:03.237934 1302865 cri.go:89] found id: ""
	I1213 14:57:03.237949 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.237957 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:03.237962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:03.238034 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:03.263822 1302865 cri.go:89] found id: ""
	I1213 14:57:03.263836 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.263843 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:03.263848 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:03.263910 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:03.289876 1302865 cri.go:89] found id: ""
	I1213 14:57:03.289890 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.289898 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:03.289902 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:03.289965 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:03.317957 1302865 cri.go:89] found id: ""
	I1213 14:57:03.317972 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.317979 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:03.318000 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:03.318060 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:03.346780 1302865 cri.go:89] found id: ""
	I1213 14:57:03.346793 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.346800 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:03.346805 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:03.346864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:03.371472 1302865 cri.go:89] found id: ""
	I1213 14:57:03.371485 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.371493 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:03.371501 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:03.371512 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:03.399569 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:03.399588 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:03.454307 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:03.454327 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:03.472933 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:03.472951 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:03.538528 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:03.529465   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.530241   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532007   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532517   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.534246   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:03.529465   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.530241   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532007   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532517   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.534246   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:03.538539 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:03.538550 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:06.101738 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:06.112716 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:06.112778 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:06.139740 1302865 cri.go:89] found id: ""
	I1213 14:57:06.139753 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.139759 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:06.139770 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:06.139831 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:06.169906 1302865 cri.go:89] found id: ""
	I1213 14:57:06.169920 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.169927 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:06.169932 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:06.169993 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:06.194468 1302865 cri.go:89] found id: ""
	I1213 14:57:06.194482 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.194492 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:06.194497 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:06.194556 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:06.219346 1302865 cri.go:89] found id: ""
	I1213 14:57:06.219360 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.219367 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:06.219372 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:06.219466 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:06.244844 1302865 cri.go:89] found id: ""
	I1213 14:57:06.244858 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.244865 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:06.244870 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:06.244928 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:06.269412 1302865 cri.go:89] found id: ""
	I1213 14:57:06.269425 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.269433 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:06.269438 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:06.269498 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:06.293947 1302865 cri.go:89] found id: ""
	I1213 14:57:06.293960 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.293967 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:06.293975 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:06.293991 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:06.320232 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:06.320249 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:06.375210 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:06.375229 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:06.392065 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:06.392081 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:06.457910 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:06.449858   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.450478   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452122   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452601   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.454099   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:06.449858   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.450478   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452122   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452601   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.454099   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:06.457920 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:06.457931 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:09.020376 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:09.030584 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:09.030644 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:09.057441 1302865 cri.go:89] found id: ""
	I1213 14:57:09.057455 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.057462 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:09.057467 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:09.057529 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:09.091252 1302865 cri.go:89] found id: ""
	I1213 14:57:09.091266 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.091273 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:09.091277 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:09.091357 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:09.133954 1302865 cri.go:89] found id: ""
	I1213 14:57:09.133969 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.133976 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:09.133981 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:09.134041 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:09.161351 1302865 cri.go:89] found id: ""
	I1213 14:57:09.161365 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.161372 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:09.161386 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:09.161449 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:09.186493 1302865 cri.go:89] found id: ""
	I1213 14:57:09.186507 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.186515 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:09.186519 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:09.186579 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:09.210752 1302865 cri.go:89] found id: ""
	I1213 14:57:09.210766 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.210774 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:09.210779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:09.210841 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:09.235216 1302865 cri.go:89] found id: ""
	I1213 14:57:09.235231 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.235238 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:09.235246 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:09.235256 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:09.290010 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:09.290030 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:09.307105 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:09.307122 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:09.373837 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:09.365574   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.366312   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368046   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368412   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.369910   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:09.365574   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.366312   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368046   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368412   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.369910   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:09.373848 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:09.373862 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:09.435916 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:09.435937 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:11.968947 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:11.978917 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:11.978976 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:12.003367 1302865 cri.go:89] found id: ""
	I1213 14:57:12.003387 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.003395 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:12.003401 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:12.003472 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:12.030862 1302865 cri.go:89] found id: ""
	I1213 14:57:12.030876 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.030883 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:12.030889 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:12.030947 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:12.055991 1302865 cri.go:89] found id: ""
	I1213 14:57:12.056006 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.056014 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:12.056020 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:12.056078 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:12.088685 1302865 cri.go:89] found id: ""
	I1213 14:57:12.088699 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.088706 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:12.088711 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:12.088771 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:12.119175 1302865 cri.go:89] found id: ""
	I1213 14:57:12.119199 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.119206 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:12.119212 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:12.119276 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:12.148170 1302865 cri.go:89] found id: ""
	I1213 14:57:12.148192 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.148199 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:12.148204 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:12.148276 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:12.173907 1302865 cri.go:89] found id: ""
	I1213 14:57:12.173929 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.173936 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:12.173944 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:12.173955 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:12.230024 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:12.230044 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:12.249202 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:12.249219 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:12.317257 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:12.307463   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.308131   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.309930   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.310545   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.312207   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:12.307463   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.308131   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.309930   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.310545   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.312207   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:12.317267 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:12.317284 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:12.384433 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:12.384455 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:14.917091 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:14.927788 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:14.927850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:14.953190 1302865 cri.go:89] found id: ""
	I1213 14:57:14.953205 1302865 logs.go:282] 0 containers: []
	W1213 14:57:14.953212 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:14.953226 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:14.953289 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:14.978043 1302865 cri.go:89] found id: ""
	I1213 14:57:14.978068 1302865 logs.go:282] 0 containers: []
	W1213 14:57:14.978075 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:14.978081 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:14.978175 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:15.004731 1302865 cri.go:89] found id: ""
	I1213 14:57:15.004749 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.004756 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:15.004761 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:15.004846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:15.048669 1302865 cri.go:89] found id: ""
	I1213 14:57:15.048685 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.048693 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:15.048698 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:15.048777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:15.085505 1302865 cri.go:89] found id: ""
	I1213 14:57:15.085520 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.085528 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:15.085534 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:15.085607 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:15.124753 1302865 cri.go:89] found id: ""
	I1213 14:57:15.124776 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.124784 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:15.124790 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:15.124860 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:15.168668 1302865 cri.go:89] found id: ""
	I1213 14:57:15.168682 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.168690 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:15.168698 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:15.168720 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:15.236878 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:15.228546   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.229113   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.230853   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.231344   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.232993   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:15.228546   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.229113   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.230853   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.231344   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.232993   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:15.236889 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:15.236899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:15.299331 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:15.299361 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:15.331125 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:15.331142 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:15.391451 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:15.391478 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:17.910179 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:17.920514 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:17.920590 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:17.945066 1302865 cri.go:89] found id: ""
	I1213 14:57:17.945081 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.945088 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:17.945094 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:17.945152 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:17.972856 1302865 cri.go:89] found id: ""
	I1213 14:57:17.972870 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.972878 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:17.972882 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:17.972944 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:17.999205 1302865 cri.go:89] found id: ""
	I1213 14:57:17.999219 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.999226 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:17.999231 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:17.999288 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:18.034164 1302865 cri.go:89] found id: ""
	I1213 14:57:18.034178 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.034185 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:18.034190 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:18.034255 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:18.060346 1302865 cri.go:89] found id: ""
	I1213 14:57:18.060361 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.060368 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:18.060373 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:18.060438 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:18.089688 1302865 cri.go:89] found id: ""
	I1213 14:57:18.089702 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.089710 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:18.089718 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:18.089780 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:18.128859 1302865 cri.go:89] found id: ""
	I1213 14:57:18.128874 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.128881 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:18.128889 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:18.128899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:18.188820 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:18.188842 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:18.206229 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:18.206247 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:18.277989 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:18.269084   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.269641   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.271489   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.272150   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.273959   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:18.269084   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.269641   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.271489   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.272150   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.273959   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:18.277999 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:18.278009 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:18.339945 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:18.339965 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:20.869114 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:20.879800 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:20.879866 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:20.905760 1302865 cri.go:89] found id: ""
	I1213 14:57:20.905774 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.905781 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:20.905786 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:20.905849 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:20.931353 1302865 cri.go:89] found id: ""
	I1213 14:57:20.931367 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.931374 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:20.931379 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:20.931445 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:20.956682 1302865 cri.go:89] found id: ""
	I1213 14:57:20.956696 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.956704 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:20.956709 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:20.956769 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:20.980824 1302865 cri.go:89] found id: ""
	I1213 14:57:20.980838 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.980845 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:20.980850 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:20.980909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:21.008951 1302865 cri.go:89] found id: ""
	I1213 14:57:21.008974 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.008982 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:21.008987 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:21.009058 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:21.038190 1302865 cri.go:89] found id: ""
	I1213 14:57:21.038204 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.038211 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:21.038216 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:21.038277 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:21.063608 1302865 cri.go:89] found id: ""
	I1213 14:57:21.063622 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.063630 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:21.063638 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:21.063648 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:21.132089 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:21.132109 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:21.171889 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:21.171908 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:21.230786 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:21.230806 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:21.247733 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:21.247753 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:21.318785 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:21.309659   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.310476   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312082   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312759   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.314434   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:21.309659   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.310476   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312082   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312759   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.314434   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
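	(The block above is one pass of minikube's apiserver health-check loop: pgrep finds no kube-apiserver process, so each expected control-plane container is listed via crictl and none is found. The same per-component check can be reproduced by hand inside the node, e.g. via `minikube ssh`; the sketch below only reuses commands already shown in this log, with the loop over component names added for brevity.)

	# Reproduce minikube's per-component container check inside the node.
	# The crictl invocation is copied from the log above; the loop is illustrative.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container found matching $name"
	  else
	    echo "$name: $ids"
	  fi
	done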
	I1213 14:57:23.819828 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:23.830541 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:23.830604 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:23.853826 1302865 cri.go:89] found id: ""
	I1213 14:57:23.853840 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.853856 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:23.853862 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:23.853933 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:23.879146 1302865 cri.go:89] found id: ""
	I1213 14:57:23.879169 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.879177 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:23.879182 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:23.879253 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:23.904357 1302865 cri.go:89] found id: ""
	I1213 14:57:23.904371 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.904379 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:23.904384 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:23.904450 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:23.929036 1302865 cri.go:89] found id: ""
	I1213 14:57:23.929050 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.929058 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:23.929063 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:23.929124 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:23.954748 1302865 cri.go:89] found id: ""
	I1213 14:57:23.954762 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.954779 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:23.954785 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:23.954854 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:23.979661 1302865 cri.go:89] found id: ""
	I1213 14:57:23.979676 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.979683 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:23.979687 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:23.979750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:24.009902 1302865 cri.go:89] found id: ""
	I1213 14:57:24.009918 1302865 logs.go:282] 0 containers: []
	W1213 14:57:24.009925 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:24.009935 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:24.009946 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:24.079943 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:24.070877   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.071501   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073325   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073881   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.075497   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:24.070877   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.071501   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073325   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073881   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.075497   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:24.079954 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:24.079966 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:24.144015 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:24.144037 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:24.174637 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:24.174654 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:24.235392 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:24.235413 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
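	(When no control-plane containers are found, minikube falls back to gathering host-level diagnostics: kubelet and containerd journals, dmesg, container status, and `kubectl describe nodes`. The same information can be collected manually with exactly the commands recorded in the log; nothing below is invented.)

	# Collect the same diagnostics minikube gathers above (run inside the node).
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig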
	I1213 14:57:26.753238 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:26.763339 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:26.763404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:26.788474 1302865 cri.go:89] found id: ""
	I1213 14:57:26.788487 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.788494 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:26.788499 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:26.788559 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:26.814440 1302865 cri.go:89] found id: ""
	I1213 14:57:26.814454 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.814461 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:26.814466 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:26.814524 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:26.841795 1302865 cri.go:89] found id: ""
	I1213 14:57:26.841809 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.841816 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:26.841821 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:26.841880 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:26.869399 1302865 cri.go:89] found id: ""
	I1213 14:57:26.869413 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.869420 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:26.869425 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:26.869482 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:26.892445 1302865 cri.go:89] found id: ""
	I1213 14:57:26.892459 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.892467 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:26.892472 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:26.892535 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:26.916537 1302865 cri.go:89] found id: ""
	I1213 14:57:26.916558 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.916565 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:26.916570 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:26.916639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:26.940628 1302865 cri.go:89] found id: ""
	I1213 14:57:26.940650 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.940658 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:26.940671 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:26.940681 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:26.969808 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:26.969827 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:27.025191 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:27.025211 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:27.042465 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:27.042482 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:27.122593 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:27.114680   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.115527   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117159   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117448   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.118857   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:27.114680   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.115527   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117159   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117448   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.118857   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:27.122618 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:27.122628 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:29.693191 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:29.703585 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:29.703652 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:29.732578 1302865 cri.go:89] found id: ""
	I1213 14:57:29.732593 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.732614 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:29.732621 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:29.732686 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:29.757517 1302865 cri.go:89] found id: ""
	I1213 14:57:29.757531 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.757538 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:29.757543 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:29.757604 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:29.785456 1302865 cri.go:89] found id: ""
	I1213 14:57:29.785470 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.785476 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:29.785482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:29.785544 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:29.809997 1302865 cri.go:89] found id: ""
	I1213 14:57:29.810011 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.810018 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:29.810023 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:29.810085 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:29.834277 1302865 cri.go:89] found id: ""
	I1213 14:57:29.834292 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.834299 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:29.834304 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:29.834366 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:29.858653 1302865 cri.go:89] found id: ""
	I1213 14:57:29.858667 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.858675 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:29.858686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:29.858749 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:29.884435 1302865 cri.go:89] found id: ""
	I1213 14:57:29.884450 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.884456 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:29.884464 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:29.884477 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:29.911338 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:29.911356 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:29.966819 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:29.966838 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:29.985125 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:29.985141 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:30.070789 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:30.059761   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.060525   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063144   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063902   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.066109   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:30.059761   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.060525   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063144   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063902   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.066109   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:30.070800 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:30.070811 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:32.643832 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:32.654329 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:32.654399 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:32.687375 1302865 cri.go:89] found id: ""
	I1213 14:57:32.687390 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.687398 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:32.687403 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:32.687465 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:32.712437 1302865 cri.go:89] found id: ""
	I1213 14:57:32.712452 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.712460 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:32.712465 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:32.712529 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:32.738220 1302865 cri.go:89] found id: ""
	I1213 14:57:32.738234 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.738241 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:32.738247 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:32.738310 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:32.763211 1302865 cri.go:89] found id: ""
	I1213 14:57:32.763226 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.763233 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:32.763238 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:32.763299 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:32.789049 1302865 cri.go:89] found id: ""
	I1213 14:57:32.789063 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.789071 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:32.789077 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:32.789141 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:32.815194 1302865 cri.go:89] found id: ""
	I1213 14:57:32.815208 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.815215 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:32.815221 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:32.815284 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:32.840629 1302865 cri.go:89] found id: ""
	I1213 14:57:32.840646 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.840653 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:32.840661 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:32.840672 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:32.868556 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:32.868574 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:32.923451 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:32.923472 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:32.940492 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:32.940508 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:33.014646 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:33.000281   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.001080   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.003301   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.004000   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.006154   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:33.000281   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.001080   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.003301   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.004000   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.006154   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:33.014656 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:33.014680 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:35.576582 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:35.586876 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:35.586939 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:35.612619 1302865 cri.go:89] found id: ""
	I1213 14:57:35.612634 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.612641 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:35.612646 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:35.612714 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:35.637275 1302865 cri.go:89] found id: ""
	I1213 14:57:35.637289 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.637296 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:35.637302 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:35.637363 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:35.661936 1302865 cri.go:89] found id: ""
	I1213 14:57:35.661950 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.661957 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:35.661962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:35.662035 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:35.691702 1302865 cri.go:89] found id: ""
	I1213 14:57:35.691716 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.691722 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:35.691727 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:35.691789 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:35.719594 1302865 cri.go:89] found id: ""
	I1213 14:57:35.719608 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.719614 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:35.719619 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:35.719685 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:35.747602 1302865 cri.go:89] found id: ""
	I1213 14:57:35.747617 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.747624 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:35.747629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:35.747690 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:35.772489 1302865 cri.go:89] found id: ""
	I1213 14:57:35.772503 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.772510 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:35.772517 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:35.772534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:35.801457 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:35.801474 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:35.859688 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:35.859708 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:35.877069 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:35.877087 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:35.942565 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:35.934502   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.935224   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.936888   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.937197   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.938690   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:35.934502   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.935224   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.936888   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.937197   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.938690   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:35.942576 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:35.942595 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:38.506862 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:38.517509 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:38.517575 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:38.542481 1302865 cri.go:89] found id: ""
	I1213 14:57:38.542496 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.542512 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:38.542517 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:38.542586 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:38.567177 1302865 cri.go:89] found id: ""
	I1213 14:57:38.567191 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.567198 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:38.567202 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:38.567264 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:38.591952 1302865 cri.go:89] found id: ""
	I1213 14:57:38.591967 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.591974 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:38.591979 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:38.592036 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:38.615589 1302865 cri.go:89] found id: ""
	I1213 14:57:38.615604 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.615619 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:38.615625 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:38.615697 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:38.641025 1302865 cri.go:89] found id: ""
	I1213 14:57:38.641039 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.641046 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:38.641051 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:38.641115 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:38.666245 1302865 cri.go:89] found id: ""
	I1213 14:57:38.666259 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.666276 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:38.666282 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:38.666355 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:38.691177 1302865 cri.go:89] found id: ""
	I1213 14:57:38.691191 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.691198 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:38.691206 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:38.691217 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:38.748984 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:38.749004 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:38.765774 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:38.765791 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:38.833656 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:38.825292   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.825956   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.827650   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.828246   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.829830   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:38.825292   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.825956   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.827650   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.828246   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.829830   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:38.833672 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:38.833683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:38.895503 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:38.895524 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:41.424760 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:41.435082 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:41.435154 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:41.460250 1302865 cri.go:89] found id: ""
	I1213 14:57:41.460265 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.460273 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:41.460278 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:41.460338 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:41.490003 1302865 cri.go:89] found id: ""
	I1213 14:57:41.490017 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.490024 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:41.490029 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:41.490094 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:41.515086 1302865 cri.go:89] found id: ""
	I1213 14:57:41.515100 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.515107 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:41.515112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:41.515173 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:41.540169 1302865 cri.go:89] found id: ""
	I1213 14:57:41.540183 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.540205 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:41.540211 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:41.540279 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:41.564345 1302865 cri.go:89] found id: ""
	I1213 14:57:41.564358 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.564365 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:41.564370 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:41.564429 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:41.589001 1302865 cri.go:89] found id: ""
	I1213 14:57:41.589015 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.589022 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:41.589027 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:41.589091 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:41.617434 1302865 cri.go:89] found id: ""
	I1213 14:57:41.617447 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.617455 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:41.617462 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:41.617471 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:41.683384 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:41.683411 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:41.711592 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:41.711611 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:41.769286 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:41.769305 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:41.786199 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:41.786219 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:41.854485 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:41.846163   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.846886   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848432   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848940   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.850514   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:41.846163   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.846886   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848432   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848940   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.850514   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
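	(Every describe-nodes attempt in this section fails identically: kubectl cannot reach https://localhost:8441 and gets "connection refused", i.e. nothing is listening on the configured apiserver port. The checks below are not part of this log; they are a minimal, hypothetical way to confirm that state on the node.)

	# Hypothetical follow-up checks (not from the log): confirm nothing serves port 8441.
	sudo ss -tlnp | grep 8441 || echo "nothing listening on :8441"
	sudo pgrep -af kube-apiserver || echo "no kube-apiserver process"
	sudo crictl ps -a --name=kube-apiserver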
	I1213 14:57:44.355606 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:44.369969 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:44.370032 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:44.401460 1302865 cri.go:89] found id: ""
	I1213 14:57:44.401474 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.401481 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:44.401486 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:44.401548 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:44.431513 1302865 cri.go:89] found id: ""
	I1213 14:57:44.431527 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.431534 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:44.431539 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:44.431600 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:44.457242 1302865 cri.go:89] found id: ""
	I1213 14:57:44.457256 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.457263 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:44.457268 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:44.457329 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:44.482224 1302865 cri.go:89] found id: ""
	I1213 14:57:44.482238 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.482245 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:44.482250 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:44.482313 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:44.509856 1302865 cri.go:89] found id: ""
	I1213 14:57:44.509871 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.509878 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:44.509884 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:44.509950 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:44.533977 1302865 cri.go:89] found id: ""
	I1213 14:57:44.533992 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.533999 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:44.534005 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:44.534069 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:44.562015 1302865 cri.go:89] found id: ""
	I1213 14:57:44.562029 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.562036 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:44.562044 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:44.562055 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:44.629999 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:44.621407   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.622108   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.623865   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.624500   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.626099   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:44.621407   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.622108   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.623865   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.624500   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.626099   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:44.630009 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:44.630020 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:44.697021 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:44.697042 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:44.725319 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:44.725336 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:44.783033 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:44.783053 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:47.300684 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:47.311369 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:47.311431 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:47.343773 1302865 cri.go:89] found id: ""
	I1213 14:57:47.343787 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.343794 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:47.343800 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:47.343864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:47.373867 1302865 cri.go:89] found id: ""
	I1213 14:57:47.373881 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.373888 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:47.373893 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:47.373950 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:47.409488 1302865 cri.go:89] found id: ""
	I1213 14:57:47.409503 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.409510 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:47.409515 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:47.409576 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:47.436144 1302865 cri.go:89] found id: ""
	I1213 14:57:47.436160 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.436166 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:47.436172 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:47.436231 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:47.459642 1302865 cri.go:89] found id: ""
	I1213 14:57:47.459656 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.459664 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:47.459669 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:47.459728 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:47.488525 1302865 cri.go:89] found id: ""
	I1213 14:57:47.488539 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.488546 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:47.488589 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:47.488660 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:47.513277 1302865 cri.go:89] found id: ""
	I1213 14:57:47.513304 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.513312 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:47.513320 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:47.513333 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:47.569182 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:47.569201 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:47.586016 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:47.586033 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:47.657399 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:47.648527   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.649441   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651289   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651877   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.653400   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:47.648527   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.649441   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651289   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651877   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.653400   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:47.657410 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:47.657421 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:47.719756 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:47.719776 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:50.250366 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:50.261360 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:50.261430 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:50.285575 1302865 cri.go:89] found id: ""
	I1213 14:57:50.285588 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.285595 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:50.285601 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:50.285657 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:50.313925 1302865 cri.go:89] found id: ""
	I1213 14:57:50.313939 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.313946 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:50.313951 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:50.314025 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:50.350634 1302865 cri.go:89] found id: ""
	I1213 14:57:50.350653 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.350660 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:50.350665 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:50.350725 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:50.377901 1302865 cri.go:89] found id: ""
	I1213 14:57:50.377915 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.377922 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:50.377927 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:50.377987 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:50.408528 1302865 cri.go:89] found id: ""
	I1213 14:57:50.408550 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.408557 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:50.408562 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:50.408637 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:50.434189 1302865 cri.go:89] found id: ""
	I1213 14:57:50.434203 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.434212 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:50.434217 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:50.434275 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:50.459353 1302865 cri.go:89] found id: ""
	I1213 14:57:50.459367 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.459373 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:50.459381 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:50.459391 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:50.515565 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:50.515585 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:50.532866 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:50.532883 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:50.599094 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:50.590849   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.591681   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593186   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593701   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.595193   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:50.590849   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.591681   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593186   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593701   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.595193   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:50.599104 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:50.599115 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:50.663140 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:50.663159 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:53.200108 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:53.210621 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:53.210684 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:53.236457 1302865 cri.go:89] found id: ""
	I1213 14:57:53.236471 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.236478 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:53.236483 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:53.236545 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:53.269649 1302865 cri.go:89] found id: ""
	I1213 14:57:53.269664 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.269670 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:53.269677 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:53.269738 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:53.293759 1302865 cri.go:89] found id: ""
	I1213 14:57:53.293774 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.293781 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:53.293786 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:53.293846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:53.318675 1302865 cri.go:89] found id: ""
	I1213 14:57:53.318690 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.318696 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:53.318701 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:53.318765 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:53.353544 1302865 cri.go:89] found id: ""
	I1213 14:57:53.353558 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.353564 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:53.353569 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:53.353630 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:53.381535 1302865 cri.go:89] found id: ""
	I1213 14:57:53.381549 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.381565 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:53.381571 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:53.381641 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:53.408473 1302865 cri.go:89] found id: ""
	I1213 14:57:53.408487 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.408494 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:53.408502 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:53.408514 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:53.463646 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:53.463670 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:53.480500 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:53.480518 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:53.545969 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:53.538062   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.538490   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540123   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540597   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.542043   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:53.538062   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.538490   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540123   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540597   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.542043   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:53.545979 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:53.545991 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:53.607729 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:53.607750 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:56.139407 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:56.150264 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:56.150335 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:56.175852 1302865 cri.go:89] found id: ""
	I1213 14:57:56.175866 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.175873 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:56.175878 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:56.175942 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:56.202887 1302865 cri.go:89] found id: ""
	I1213 14:57:56.202901 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.202908 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:56.202921 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:56.202981 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:56.229038 1302865 cri.go:89] found id: ""
	I1213 14:57:56.229053 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.229060 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:56.229065 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:56.229125 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:56.253081 1302865 cri.go:89] found id: ""
	I1213 14:57:56.253096 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.253103 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:56.253108 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:56.253172 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:56.277822 1302865 cri.go:89] found id: ""
	I1213 14:57:56.277836 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.277843 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:56.277849 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:56.277910 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:56.302419 1302865 cri.go:89] found id: ""
	I1213 14:57:56.302435 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.302442 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:56.302447 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:56.302508 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:56.327036 1302865 cri.go:89] found id: ""
	I1213 14:57:56.327050 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.327057 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:56.327066 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:56.327078 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:56.353968 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:56.353986 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:56.426915 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:56.418628   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.419173   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.420794   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.421363   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.422912   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:56.418628   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.419173   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.420794   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.421363   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.422912   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:56.426926 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:56.426943 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:56.488491 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:56.488513 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:56.516737 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:56.516753 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:59.077330 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:59.087745 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:59.087809 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:59.113689 1302865 cri.go:89] found id: ""
	I1213 14:57:59.113703 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.113710 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:59.113715 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:59.113774 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:59.138884 1302865 cri.go:89] found id: ""
	I1213 14:57:59.138898 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.138905 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:59.138911 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:59.138976 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:59.164226 1302865 cri.go:89] found id: ""
	I1213 14:57:59.164240 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.164246 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:59.164254 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:59.164312 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:59.189753 1302865 cri.go:89] found id: ""
	I1213 14:57:59.189767 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.189774 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:59.189779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:59.189840 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:59.219066 1302865 cri.go:89] found id: ""
	I1213 14:57:59.219080 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.219086 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:59.219092 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:59.219152 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:59.243456 1302865 cri.go:89] found id: ""
	I1213 14:57:59.243470 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.243477 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:59.243482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:59.243544 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:59.267676 1302865 cri.go:89] found id: ""
	I1213 14:57:59.267692 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.267699 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:59.267707 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:59.267719 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:59.284600 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:59.284617 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:59.356184 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:59.346354   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.347267   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349163   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349876   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.351596   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:59.346354   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.347267   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349163   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349876   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.351596   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:59.356202 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:59.356215 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:59.427513 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:59.427535 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:59.459203 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:59.459220 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:02.016233 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:02.027182 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:02.027246 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:02.053453 1302865 cri.go:89] found id: ""
	I1213 14:58:02.053467 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.053475 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:02.053480 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:02.053543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:02.081288 1302865 cri.go:89] found id: ""
	I1213 14:58:02.081303 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.081310 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:02.081315 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:02.081377 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:02.106556 1302865 cri.go:89] found id: ""
	I1213 14:58:02.106572 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.106579 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:02.106585 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:02.106645 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:02.131201 1302865 cri.go:89] found id: ""
	I1213 14:58:02.131215 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.131221 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:02.131226 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:02.131286 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:02.156170 1302865 cri.go:89] found id: ""
	I1213 14:58:02.156194 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.156202 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:02.156207 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:02.156275 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:02.185059 1302865 cri.go:89] found id: ""
	I1213 14:58:02.185073 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.185080 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:02.185086 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:02.185153 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:02.209854 1302865 cri.go:89] found id: ""
	I1213 14:58:02.209870 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.209884 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:02.209893 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:02.209903 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:02.279934 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:02.270936   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.271416   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273300   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273902   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.275651   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:02.270936   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.271416   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273300   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273902   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.275651   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:02.279958 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:02.279970 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:02.341869 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:02.341888 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:02.370761 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:02.370783 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:02.431851 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:02.431869 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:04.950137 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:04.960995 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:04.961059 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:04.986243 1302865 cri.go:89] found id: ""
	I1213 14:58:04.986257 1302865 logs.go:282] 0 containers: []
	W1213 14:58:04.986264 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:04.986269 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:04.986329 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:05.016170 1302865 cri.go:89] found id: ""
	I1213 14:58:05.016192 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.016200 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:05.016206 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:05.016270 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:05.042103 1302865 cri.go:89] found id: ""
	I1213 14:58:05.042117 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.042124 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:05.042129 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:05.042188 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:05.066050 1302865 cri.go:89] found id: ""
	I1213 14:58:05.066065 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.066071 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:05.066077 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:05.066141 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:05.091600 1302865 cri.go:89] found id: ""
	I1213 14:58:05.091615 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.091623 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:05.091634 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:05.091698 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:05.117406 1302865 cri.go:89] found id: ""
	I1213 14:58:05.117420 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.117427 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:05.117432 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:05.117491 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:05.143774 1302865 cri.go:89] found id: ""
	I1213 14:58:05.143788 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.143794 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:05.143802 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:05.143823 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:05.198717 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:05.198736 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:05.216110 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:05.216127 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:05.281771 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:05.273944   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.274471   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276069   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276389   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.277907   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:05.273944   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.274471   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276069   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276389   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.277907   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:05.281792 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:05.281804 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:05.344051 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:05.344070 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:07.872032 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:07.883862 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:07.883925 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:07.908603 1302865 cri.go:89] found id: ""
	I1213 14:58:07.908616 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.908623 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:07.908628 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:07.908696 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:07.932609 1302865 cri.go:89] found id: ""
	I1213 14:58:07.932624 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.932631 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:07.932636 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:07.932729 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:07.957476 1302865 cri.go:89] found id: ""
	I1213 14:58:07.957490 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.957497 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:07.957502 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:07.957561 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:07.983994 1302865 cri.go:89] found id: ""
	I1213 14:58:07.984014 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.984022 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:07.984027 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:07.984090 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:08.016758 1302865 cri.go:89] found id: ""
	I1213 14:58:08.016772 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.016779 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:08.016784 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:08.016850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:08.048311 1302865 cri.go:89] found id: ""
	I1213 14:58:08.048326 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.048333 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:08.048338 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:08.048404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:08.074196 1302865 cri.go:89] found id: ""
	I1213 14:58:08.074211 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.074219 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:08.074226 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:08.074237 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:08.139046 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:08.139073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:08.167121 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:08.167141 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:08.222634 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:08.222664 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:08.240309 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:08.240325 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:08.310479 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:08.301605   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.302332   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304082   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304629   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.306246   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:08.301605   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.302332   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304082   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304629   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.306246   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:10.810723 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:10.820844 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:10.820953 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:10.865862 1302865 cri.go:89] found id: ""
	I1213 14:58:10.865875 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.865882 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:10.865888 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:10.865959 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:10.896607 1302865 cri.go:89] found id: ""
	I1213 14:58:10.896621 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.896628 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:10.896634 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:10.896710 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:10.924657 1302865 cri.go:89] found id: ""
	I1213 14:58:10.924671 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.924678 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:10.924684 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:10.924748 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:10.949300 1302865 cri.go:89] found id: ""
	I1213 14:58:10.949314 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.949321 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:10.949326 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:10.949388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:10.973896 1302865 cri.go:89] found id: ""
	I1213 14:58:10.973910 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.973917 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:10.973922 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:10.973983 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:10.998200 1302865 cri.go:89] found id: ""
	I1213 14:58:10.998214 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.998231 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:10.998237 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:10.998295 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:11.024841 1302865 cri.go:89] found id: ""
	I1213 14:58:11.024856 1302865 logs.go:282] 0 containers: []
	W1213 14:58:11.024863 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:11.024871 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:11.024886 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:11.092350 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:11.083613   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.084237   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086051   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086531   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.088132   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:11.083613   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.084237   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086051   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086531   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.088132   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:11.092361 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:11.092372 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:11.154591 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:11.154612 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:11.187883 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:11.187899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:11.248594 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:11.248613 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:13.766160 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:13.776057 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:13.776115 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:13.800863 1302865 cri.go:89] found id: ""
	I1213 14:58:13.800877 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.800884 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:13.800889 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:13.800990 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:13.825283 1302865 cri.go:89] found id: ""
	I1213 14:58:13.825298 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.825305 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:13.825309 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:13.825368 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:13.857732 1302865 cri.go:89] found id: ""
	I1213 14:58:13.857746 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.857753 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:13.857758 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:13.857816 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:13.891546 1302865 cri.go:89] found id: ""
	I1213 14:58:13.891560 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.891566 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:13.891572 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:13.891629 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:13.918725 1302865 cri.go:89] found id: ""
	I1213 14:58:13.918738 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.918746 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:13.918750 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:13.918810 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:13.942434 1302865 cri.go:89] found id: ""
	I1213 14:58:13.942448 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.942455 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:13.942460 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:13.942521 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:13.966591 1302865 cri.go:89] found id: ""
	I1213 14:58:13.966606 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.966613 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:13.966621 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:13.966632 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:13.983200 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:13.983217 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:14.050601 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:14.041722   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.042310   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044028   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044598   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.046179   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:14.041722   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.042310   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044028   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044598   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.046179   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:14.050610 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:14.050622 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:14.111742 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:14.111761 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:14.139171 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:14.139189 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:16.694504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:16.704690 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:16.704753 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:16.730421 1302865 cri.go:89] found id: ""
	I1213 14:58:16.730436 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.730444 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:16.730449 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:16.730510 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:16.755642 1302865 cri.go:89] found id: ""
	I1213 14:58:16.755657 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.755676 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:16.755681 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:16.755741 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:16.780583 1302865 cri.go:89] found id: ""
	I1213 14:58:16.780597 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.780604 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:16.780610 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:16.780685 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:16.809520 1302865 cri.go:89] found id: ""
	I1213 14:58:16.809534 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.809542 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:16.809547 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:16.809606 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:16.845772 1302865 cri.go:89] found id: ""
	I1213 14:58:16.845787 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.845794 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:16.845799 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:16.845867 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:16.871303 1302865 cri.go:89] found id: ""
	I1213 14:58:16.871338 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.871345 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:16.871350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:16.871411 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:16.897846 1302865 cri.go:89] found id: ""
	I1213 14:58:16.897859 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.897866 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:16.897875 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:16.897885 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:16.959059 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:16.959079 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:16.996406 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:16.996421 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:17.052568 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:17.052589 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:17.069678 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:17.069696 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:17.133677 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:17.125422   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.125964   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.127591   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.128083   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.129662   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:17.125422   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.125964   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.127591   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.128083   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.129662   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:19.633920 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:19.644044 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:19.644109 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:19.668667 1302865 cri.go:89] found id: ""
	I1213 14:58:19.668681 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.668688 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:19.668693 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:19.668759 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:19.693045 1302865 cri.go:89] found id: ""
	I1213 14:58:19.693059 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.693066 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:19.693071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:19.693134 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:19.717622 1302865 cri.go:89] found id: ""
	I1213 14:58:19.717637 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.717643 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:19.717649 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:19.717708 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:19.742933 1302865 cri.go:89] found id: ""
	I1213 14:58:19.742948 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.742954 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:19.742962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:19.743024 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:19.767055 1302865 cri.go:89] found id: ""
	I1213 14:58:19.767069 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.767076 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:19.767081 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:19.767139 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:19.793086 1302865 cri.go:89] found id: ""
	I1213 14:58:19.793100 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.793107 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:19.793112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:19.793172 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:19.816884 1302865 cri.go:89] found id: ""
	I1213 14:58:19.816898 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.816905 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:19.816912 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:19.816927 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:19.833746 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:19.833763 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:19.912181 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:19.904591   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.905016   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906518   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906824   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.908282   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:19.904591   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.905016   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906518   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906824   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.908282   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:19.912191 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:19.912202 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:19.973611 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:19.973631 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:20.005249 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:20.005269 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:22.571015 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:22.581487 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:22.581553 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:22.606385 1302865 cri.go:89] found id: ""
	I1213 14:58:22.606399 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.606405 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:22.606411 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:22.606466 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:22.631290 1302865 cri.go:89] found id: ""
	I1213 14:58:22.631304 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.631330 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:22.631341 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:22.631402 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:22.656039 1302865 cri.go:89] found id: ""
	I1213 14:58:22.656053 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.656059 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:22.656064 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:22.656123 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:22.680255 1302865 cri.go:89] found id: ""
	I1213 14:58:22.680268 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.680275 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:22.680281 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:22.680339 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:22.705412 1302865 cri.go:89] found id: ""
	I1213 14:58:22.705426 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.705434 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:22.705439 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:22.705501 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:22.729869 1302865 cri.go:89] found id: ""
	I1213 14:58:22.729885 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.729891 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:22.729897 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:22.729961 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:22.757980 1302865 cri.go:89] found id: ""
	I1213 14:58:22.757994 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.758001 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:22.758009 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:22.758022 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:22.774416 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:22.774433 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:22.850017 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:22.837138   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.837894   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.843564   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.844183   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.845796   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:22.837138   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.837894   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.843564   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.844183   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.845796   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:22.850034 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:22.850045 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:22.916305 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:22.916327 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:22.946422 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:22.946438 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:25.504766 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:25.515062 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:25.515129 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:25.539801 1302865 cri.go:89] found id: ""
	I1213 14:58:25.539815 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.539822 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:25.539827 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:25.539888 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:25.564134 1302865 cri.go:89] found id: ""
	I1213 14:58:25.564148 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.564155 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:25.564159 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:25.564218 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:25.588150 1302865 cri.go:89] found id: ""
	I1213 14:58:25.588165 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.588173 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:25.588178 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:25.588239 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:25.613567 1302865 cri.go:89] found id: ""
	I1213 14:58:25.613581 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.613588 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:25.613593 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:25.613659 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:25.643274 1302865 cri.go:89] found id: ""
	I1213 14:58:25.643290 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.643297 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:25.643303 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:25.643388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:25.668136 1302865 cri.go:89] found id: ""
	I1213 14:58:25.668150 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.668157 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:25.668162 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:25.668223 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:25.693114 1302865 cri.go:89] found id: ""
	I1213 14:58:25.693128 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.693135 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:25.693143 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:25.693152 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:25.751087 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:25.751106 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:25.768578 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:25.768598 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:25.842306 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:25.826336   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.827040   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.829047   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.830610   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.833626   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:25.826336   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.827040   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.829047   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.830610   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.833626   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:25.842315 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:25.842325 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:25.934744 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:25.934771 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:28.468857 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:28.479478 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:28.479543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:28.509273 1302865 cri.go:89] found id: ""
	I1213 14:58:28.509286 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.509293 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:28.509299 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:28.509360 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:28.535574 1302865 cri.go:89] found id: ""
	I1213 14:58:28.535588 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.535595 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:28.535601 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:28.535660 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:28.561231 1302865 cri.go:89] found id: ""
	I1213 14:58:28.561244 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.561251 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:28.561256 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:28.561316 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:28.586867 1302865 cri.go:89] found id: ""
	I1213 14:58:28.586881 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.586897 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:28.586903 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:28.586971 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:28.613781 1302865 cri.go:89] found id: ""
	I1213 14:58:28.613795 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.613802 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:28.613807 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:28.613865 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:28.639226 1302865 cri.go:89] found id: ""
	I1213 14:58:28.639247 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.639255 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:28.639260 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:28.639351 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:28.664957 1302865 cri.go:89] found id: ""
	I1213 14:58:28.664971 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.664977 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:28.664985 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:28.664995 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:28.681545 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:28.681562 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:28.746274 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:28.738221   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.738895   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740429   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740922   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.742410   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:28.738221   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.738895   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740429   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740922   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.742410   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:28.746286 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:28.746297 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:28.811866 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:28.811886 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:28.853916 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:28.853932 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:31.417796 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:31.427841 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:31.427906 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:31.454876 1302865 cri.go:89] found id: ""
	I1213 14:58:31.454890 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.454897 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:31.454903 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:31.454967 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:31.478745 1302865 cri.go:89] found id: ""
	I1213 14:58:31.478763 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.478770 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:31.478774 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:31.478834 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:31.504045 1302865 cri.go:89] found id: ""
	I1213 14:58:31.504059 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.504066 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:31.504071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:31.504132 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:31.536667 1302865 cri.go:89] found id: ""
	I1213 14:58:31.536687 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.536694 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:31.536699 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:31.536759 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:31.561651 1302865 cri.go:89] found id: ""
	I1213 14:58:31.561665 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.561672 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:31.561679 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:31.561740 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:31.590467 1302865 cri.go:89] found id: ""
	I1213 14:58:31.590487 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.590494 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:31.590499 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:31.590572 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:31.621443 1302865 cri.go:89] found id: ""
	I1213 14:58:31.621457 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.621467 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:31.621475 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:31.621485 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:31.689190 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:31.680703   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.681594   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.682366   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.683913   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.684339   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:31.680703   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.681594   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.682366   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.683913   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.684339   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:31.689199 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:31.689210 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:31.750918 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:31.750940 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:31.777989 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:31.778007 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:31.837415 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:31.837438 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:34.355220 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:34.365583 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:34.365646 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:34.390861 1302865 cri.go:89] found id: ""
	I1213 14:58:34.390875 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.390882 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:34.390887 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:34.390945 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:34.419452 1302865 cri.go:89] found id: ""
	I1213 14:58:34.419466 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.419473 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:34.419478 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:34.419540 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:34.444048 1302865 cri.go:89] found id: ""
	I1213 14:58:34.444062 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.444069 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:34.444073 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:34.444135 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:34.472603 1302865 cri.go:89] found id: ""
	I1213 14:58:34.472617 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.472623 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:34.472629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:34.472693 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:34.496330 1302865 cri.go:89] found id: ""
	I1213 14:58:34.496344 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.496351 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:34.496356 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:34.496415 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:34.521267 1302865 cri.go:89] found id: ""
	I1213 14:58:34.521281 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.521288 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:34.521294 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:34.521355 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:34.545219 1302865 cri.go:89] found id: ""
	I1213 14:58:34.545234 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.545241 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:34.545248 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:34.545263 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:34.611331 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:34.602304   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.603074   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.604885   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.605533   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.607098   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:34.602304   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.603074   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.604885   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.605533   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.607098   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:34.611342 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:34.611352 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:34.674005 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:34.674023 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:34.701768 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:34.701784 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:34.760313 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:34.760332 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:37.279813 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:37.289901 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:37.289961 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:37.314082 1302865 cri.go:89] found id: ""
	I1213 14:58:37.314097 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.314103 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:37.314115 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:37.314174 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:37.349456 1302865 cri.go:89] found id: ""
	I1213 14:58:37.349470 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.349477 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:37.349482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:37.349540 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:37.376791 1302865 cri.go:89] found id: ""
	I1213 14:58:37.376805 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.376812 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:37.376817 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:37.376877 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:37.400702 1302865 cri.go:89] found id: ""
	I1213 14:58:37.400717 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.400724 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:37.400730 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:37.400792 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:37.424348 1302865 cri.go:89] found id: ""
	I1213 14:58:37.424363 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.424370 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:37.424375 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:37.424435 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:37.449182 1302865 cri.go:89] found id: ""
	I1213 14:58:37.449197 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.449204 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:37.449209 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:37.449270 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:37.476252 1302865 cri.go:89] found id: ""
	I1213 14:58:37.476266 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.476273 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:37.476280 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:37.476294 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:37.534602 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:37.534621 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:37.552019 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:37.552037 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:37.614270 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:37.605713   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.606478   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608056   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608699   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.610244   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:37.605713   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.606478   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608056   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608699   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.610244   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:37.614281 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:37.614292 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:37.676894 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:37.676913 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:40.209558 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:40.220003 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:40.220065 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:40.246553 1302865 cri.go:89] found id: ""
	I1213 14:58:40.246567 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.246574 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:40.246579 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:40.246642 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:40.270663 1302865 cri.go:89] found id: ""
	I1213 14:58:40.270677 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.270684 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:40.270689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:40.270750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:40.296263 1302865 cri.go:89] found id: ""
	I1213 14:58:40.296278 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.296285 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:40.296292 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:40.296352 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:40.320181 1302865 cri.go:89] found id: ""
	I1213 14:58:40.320195 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.320204 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:40.320208 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:40.320268 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:40.345140 1302865 cri.go:89] found id: ""
	I1213 14:58:40.345155 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.345162 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:40.345167 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:40.345236 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:40.368989 1302865 cri.go:89] found id: ""
	I1213 14:58:40.369003 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.369010 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:40.369015 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:40.369075 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:40.393631 1302865 cri.go:89] found id: ""
	I1213 14:58:40.393646 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.393653 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:40.393661 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:40.393672 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:40.421318 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:40.421334 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:40.480359 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:40.480379 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:40.497525 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:40.497544 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:40.565603 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:40.557124   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.557721   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559295   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559804   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.561673   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:40.557124   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.557721   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559295   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559804   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.561673   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:40.565614 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:40.565625 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:43.127433 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:43.141684 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:43.141744 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:43.166921 1302865 cri.go:89] found id: ""
	I1213 14:58:43.166935 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.166942 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:43.166947 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:43.167010 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:43.191796 1302865 cri.go:89] found id: ""
	I1213 14:58:43.191810 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.191817 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:43.191823 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:43.191883 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:43.220968 1302865 cri.go:89] found id: ""
	I1213 14:58:43.220982 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.220988 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:43.220993 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:43.221050 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:43.249138 1302865 cri.go:89] found id: ""
	I1213 14:58:43.249153 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.249160 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:43.249166 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:43.249226 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:43.273972 1302865 cri.go:89] found id: ""
	I1213 14:58:43.273986 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.273993 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:43.273998 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:43.274056 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:43.298424 1302865 cri.go:89] found id: ""
	I1213 14:58:43.298439 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.298446 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:43.298451 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:43.298523 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:43.326886 1302865 cri.go:89] found id: ""
	I1213 14:58:43.326900 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.326907 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:43.326915 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:43.326925 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:43.383183 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:43.383202 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:43.401545 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:43.401564 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:43.472321 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:43.463674   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.464321   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466101   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466707   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.468411   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:43.463674   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.464321   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466101   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466707   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.468411   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:43.472331 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:43.472347 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:43.535483 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:43.535504 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:46.069443 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:46.079671 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:46.079735 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:46.112232 1302865 cri.go:89] found id: ""
	I1213 14:58:46.112246 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.112263 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:46.112268 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:46.112334 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:46.143946 1302865 cri.go:89] found id: ""
	I1213 14:58:46.143960 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.143968 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:46.143973 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:46.144034 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:46.172869 1302865 cri.go:89] found id: ""
	I1213 14:58:46.172893 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.172901 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:46.172906 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:46.172969 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:46.198118 1302865 cri.go:89] found id: ""
	I1213 14:58:46.198132 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.198139 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:46.198144 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:46.198210 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:46.226657 1302865 cri.go:89] found id: ""
	I1213 14:58:46.226672 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.226679 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:46.226689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:46.226750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:46.250158 1302865 cri.go:89] found id: ""
	I1213 14:58:46.250183 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.250190 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:46.250199 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:46.250268 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:46.275259 1302865 cri.go:89] found id: ""
	I1213 14:58:46.275274 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.275281 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:46.275303 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:46.275335 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:46.349416 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:46.340779   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.341521   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343041   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343652   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.345289   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:46.340779   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.341521   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343041   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343652   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.345289   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:46.349427 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:46.349440 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:46.412854 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:46.412874 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:46.443625 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:46.443641 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:46.501088 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:46.501108 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:49.018999 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:49.029334 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:49.029404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:49.054853 1302865 cri.go:89] found id: ""
	I1213 14:58:49.054867 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.054874 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:49.054879 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:49.054941 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:49.081166 1302865 cri.go:89] found id: ""
	I1213 14:58:49.081185 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.081193 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:49.081198 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:49.081261 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:49.109404 1302865 cri.go:89] found id: ""
	I1213 14:58:49.109418 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.109425 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:49.109430 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:49.109493 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:49.136643 1302865 cri.go:89] found id: ""
	I1213 14:58:49.136658 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.136665 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:49.136670 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:49.136741 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:49.165751 1302865 cri.go:89] found id: ""
	I1213 14:58:49.165765 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.165772 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:49.165777 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:49.165837 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:49.193225 1302865 cri.go:89] found id: ""
	I1213 14:58:49.193239 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.193246 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:49.193252 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:49.193314 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:49.221440 1302865 cri.go:89] found id: ""
	I1213 14:58:49.221455 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.221462 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:49.221470 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:49.221480 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:49.277216 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:49.277234 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:49.293907 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:49.293927 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:49.356075 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:49.348073   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.348458   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350225   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350571   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.352111   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:49.348073   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.348458   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350225   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350571   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.352111   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:49.356085 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:49.356095 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:49.418015 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:49.418034 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:51.951013 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:51.961457 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:51.961522 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:51.988624 1302865 cri.go:89] found id: ""
	I1213 14:58:51.988638 1302865 logs.go:282] 0 containers: []
	W1213 14:58:51.988645 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:51.988650 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:51.988725 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:52.015499 1302865 cri.go:89] found id: ""
	I1213 14:58:52.015513 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.015520 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:52.015526 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:52.015589 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:52.041762 1302865 cri.go:89] found id: ""
	I1213 14:58:52.041777 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.041784 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:52.041789 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:52.041850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:52.068323 1302865 cri.go:89] found id: ""
	I1213 14:58:52.068338 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.068345 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:52.068350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:52.068415 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:52.106065 1302865 cri.go:89] found id: ""
	I1213 14:58:52.106079 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.106086 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:52.106091 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:52.106160 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:52.140252 1302865 cri.go:89] found id: ""
	I1213 14:58:52.140272 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.140279 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:52.140284 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:52.140343 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:52.167100 1302865 cri.go:89] found id: ""
	I1213 14:58:52.167113 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.167120 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:52.167128 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:52.167138 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:52.226191 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:52.226210 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:52.243667 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:52.243683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:52.311033 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:52.302537   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.303096   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.304747   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.305085   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.306664   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:52.302537   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.303096   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.304747   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.305085   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.306664   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:52.311046 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:52.311057 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:52.372679 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:52.372703 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:54.903108 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:54.913373 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:54.913436 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:54.938658 1302865 cri.go:89] found id: ""
	I1213 14:58:54.938673 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.938680 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:54.938686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:54.938753 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:54.962838 1302865 cri.go:89] found id: ""
	I1213 14:58:54.962851 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.962866 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:54.962871 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:54.962942 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:54.988758 1302865 cri.go:89] found id: ""
	I1213 14:58:54.988773 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.988780 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:54.988785 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:54.988855 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:55.021177 1302865 cri.go:89] found id: ""
	I1213 14:58:55.021192 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.021200 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:55.021206 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:55.021272 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:55.049330 1302865 cri.go:89] found id: ""
	I1213 14:58:55.049344 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.049356 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:55.049361 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:55.049421 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:55.079835 1302865 cri.go:89] found id: ""
	I1213 14:58:55.079849 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.079856 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:55.079861 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:55.079920 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:55.107073 1302865 cri.go:89] found id: ""
	I1213 14:58:55.107087 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.107094 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:55.107102 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:55.107112 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:55.165853 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:55.165871 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:55.183109 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:55.183127 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:55.251642 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:55.242691   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.243211   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.244929   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.245470   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.247181   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:55.242691   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.243211   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.244929   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.245470   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.247181   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:55.251652 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:55.251664 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:55.317380 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:55.317399 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:57.847271 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:57.857537 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:57.857603 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:57.882391 1302865 cri.go:89] found id: ""
	I1213 14:58:57.882405 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.882412 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:57.882417 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:57.882490 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:57.905909 1302865 cri.go:89] found id: ""
	I1213 14:58:57.905923 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.905943 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:57.905948 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:57.906018 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:57.930237 1302865 cri.go:89] found id: ""
	I1213 14:58:57.930252 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.930259 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:57.930264 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:57.930337 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:57.958985 1302865 cri.go:89] found id: ""
	I1213 14:58:57.959014 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.959020 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:57.959031 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:57.959099 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:57.983693 1302865 cri.go:89] found id: ""
	I1213 14:58:57.983707 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.983714 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:57.983719 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:57.983779 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:58.012155 1302865 cri.go:89] found id: ""
	I1213 14:58:58.012170 1302865 logs.go:282] 0 containers: []
	W1213 14:58:58.012178 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:58.012183 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:58.012250 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:58.043700 1302865 cri.go:89] found id: ""
	I1213 14:58:58.043714 1302865 logs.go:282] 0 containers: []
	W1213 14:58:58.043722 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:58.043730 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:58.043742 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:58.105070 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:58.105098 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:58.123698 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:58.123717 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:58.194632 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:58.186276   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.187012   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.188759   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.189247   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.190768   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:58.186276   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.187012   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.188759   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.189247   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.190768   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:58.194642 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:58.194653 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:58.256210 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:58.256230 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:00.787680 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:00.798261 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:00.798326 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:00.826895 1302865 cri.go:89] found id: ""
	I1213 14:59:00.826908 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.826915 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:00.826921 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:00.826980 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:00.851410 1302865 cri.go:89] found id: ""
	I1213 14:59:00.851424 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.851431 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:00.851437 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:00.851510 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:00.876891 1302865 cri.go:89] found id: ""
	I1213 14:59:00.876906 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.876912 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:00.876917 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:00.876975 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:00.900564 1302865 cri.go:89] found id: ""
	I1213 14:59:00.900578 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.900585 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:00.900589 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:00.900647 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:00.925560 1302865 cri.go:89] found id: ""
	I1213 14:59:00.925574 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.925581 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:00.925586 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:00.925647 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:00.954298 1302865 cri.go:89] found id: ""
	I1213 14:59:00.954311 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.954319 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:00.954330 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:00.954388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:00.980684 1302865 cri.go:89] found id: ""
	I1213 14:59:00.980698 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.980704 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:00.980718 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:00.980731 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:01.048024 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:01.039594   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.040152   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.041748   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.042294   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.043863   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:01.039594   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.040152   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.041748   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.042294   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.043863   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:01.048033 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:01.048044 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:01.110723 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:01.110742 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:01.144966 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:01.144983 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:01.203272 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:01.203301 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:03.722770 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:03.733112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:03.733170 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:03.761042 1302865 cri.go:89] found id: ""
	I1213 14:59:03.761057 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.761064 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:03.761069 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:03.761130 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:03.789429 1302865 cri.go:89] found id: ""
	I1213 14:59:03.789443 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.789450 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:03.789455 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:03.789521 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:03.816916 1302865 cri.go:89] found id: ""
	I1213 14:59:03.816930 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.816937 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:03.816942 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:03.817001 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:03.844301 1302865 cri.go:89] found id: ""
	I1213 14:59:03.844317 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.844324 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:03.844329 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:03.844388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:03.873060 1302865 cri.go:89] found id: ""
	I1213 14:59:03.873075 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.873082 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:03.873087 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:03.873147 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:03.910513 1302865 cri.go:89] found id: ""
	I1213 14:59:03.910527 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.910534 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:03.910539 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:03.910601 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:03.938039 1302865 cri.go:89] found id: ""
	I1213 14:59:03.938053 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.938060 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:03.938067 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:03.938077 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:03.993458 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:03.993478 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:04.011140 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:04.011157 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:04.078339 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:04.069502   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.070272   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072094   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072604   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.074176   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:04.069502   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.070272   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072094   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072604   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.074176   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:04.078350 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:04.078361 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:04.142915 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:04.142934 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:06.673444 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:06.683643 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:06.683703 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:06.708707 1302865 cri.go:89] found id: ""
	I1213 14:59:06.708727 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.708734 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:06.708739 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:06.708799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:06.734465 1302865 cri.go:89] found id: ""
	I1213 14:59:06.734479 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.734486 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:06.734495 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:06.734584 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:06.759590 1302865 cri.go:89] found id: ""
	I1213 14:59:06.759603 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.759610 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:06.759615 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:06.759674 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:06.785693 1302865 cri.go:89] found id: ""
	I1213 14:59:06.785706 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.785713 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:06.785720 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:06.785777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:06.810125 1302865 cri.go:89] found id: ""
	I1213 14:59:06.810139 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.810146 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:06.810151 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:06.810215 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:06.835783 1302865 cri.go:89] found id: ""
	I1213 14:59:06.835797 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.835804 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:06.835809 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:06.835869 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:06.860909 1302865 cri.go:89] found id: ""
	I1213 14:59:06.860922 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.860929 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:06.860936 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:06.860946 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:06.916027 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:06.916047 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:06.933118 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:06.933135 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:06.997759 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:06.989536   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.990242   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.991821   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.992353   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.993901   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:06.989536   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.990242   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.991821   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.992353   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.993901   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:06.997769 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:06.997779 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:07.059939 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:07.059961 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:09.591076 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:09.601913 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:09.601975 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:09.626204 1302865 cri.go:89] found id: ""
	I1213 14:59:09.626218 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.626225 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:09.626230 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:09.626289 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:09.653443 1302865 cri.go:89] found id: ""
	I1213 14:59:09.653457 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.653463 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:09.653469 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:09.653531 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:09.678836 1302865 cri.go:89] found id: ""
	I1213 14:59:09.678851 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.678858 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:09.678865 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:09.678924 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:09.704492 1302865 cri.go:89] found id: ""
	I1213 14:59:09.704506 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.704514 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:09.704519 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:09.704581 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:09.733333 1302865 cri.go:89] found id: ""
	I1213 14:59:09.733355 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.733363 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:09.733368 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:09.733431 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:09.758847 1302865 cri.go:89] found id: ""
	I1213 14:59:09.758861 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.758869 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:09.758874 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:09.758946 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:09.785932 1302865 cri.go:89] found id: ""
	I1213 14:59:09.785946 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.785953 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:09.785962 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:09.785973 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:09.842054 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:09.842073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:09.859249 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:09.859273 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:09.924527 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:09.916673   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.917242   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.918722   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.919219   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.920662   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:09.916673   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.917242   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.918722   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.919219   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.920662   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:09.924536 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:09.924546 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:09.987531 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:09.987550 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:12.517373 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:12.529230 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:12.529292 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:12.558354 1302865 cri.go:89] found id: ""
	I1213 14:59:12.558368 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.558375 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:12.558380 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:12.558439 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:12.585312 1302865 cri.go:89] found id: ""
	I1213 14:59:12.585326 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.585333 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:12.585338 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:12.585396 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:12.613481 1302865 cri.go:89] found id: ""
	I1213 14:59:12.613494 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.613501 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:12.613506 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:12.613564 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:12.636592 1302865 cri.go:89] found id: ""
	I1213 14:59:12.636614 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.636621 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:12.636627 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:12.636694 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:12.660499 1302865 cri.go:89] found id: ""
	I1213 14:59:12.660513 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.660520 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:12.660524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:12.660591 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:12.684274 1302865 cri.go:89] found id: ""
	I1213 14:59:12.684297 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.684304 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:12.684309 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:12.684377 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:12.715959 1302865 cri.go:89] found id: ""
	I1213 14:59:12.715973 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.715980 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:12.715992 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:12.716003 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:12.779780 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:12.771561   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.772194   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.773845   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.774330   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.775929   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:12.771561   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.772194   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.773845   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.774330   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.775929   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:12.779790 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:12.779801 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:12.840858 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:12.840877 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:12.870238 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:12.870256 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:12.930596 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:12.930615 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:15.449328 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:15.460194 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:15.460255 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:15.484663 1302865 cri.go:89] found id: ""
	I1213 14:59:15.484677 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.484683 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:15.484689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:15.484799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:15.513604 1302865 cri.go:89] found id: ""
	I1213 14:59:15.513619 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.513626 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:15.513631 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:15.513692 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:15.543496 1302865 cri.go:89] found id: ""
	I1213 14:59:15.543510 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.543517 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:15.543524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:15.543596 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:15.576119 1302865 cri.go:89] found id: ""
	I1213 14:59:15.576133 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.576140 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:15.576145 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:15.576207 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:15.600649 1302865 cri.go:89] found id: ""
	I1213 14:59:15.600663 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.600670 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:15.600675 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:15.600743 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:15.624956 1302865 cri.go:89] found id: ""
	I1213 14:59:15.624970 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.624977 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:15.624984 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:15.625045 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:15.649687 1302865 cri.go:89] found id: ""
	I1213 14:59:15.649700 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.649707 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:15.649717 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:15.649728 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:15.711417 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:15.711439 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:15.739859 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:15.739876 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:15.796008 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:15.796027 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:15.813254 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:15.813271 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:15.889756 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:15.881341   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.881741   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883505   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883986   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.885525   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:15.881341   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.881741   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883505   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883986   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.885525   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:18.390805 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:18.401397 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:18.401458 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:18.426479 1302865 cri.go:89] found id: ""
	I1213 14:59:18.426493 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.426501 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:18.426507 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:18.426569 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:18.451763 1302865 cri.go:89] found id: ""
	I1213 14:59:18.451777 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.451784 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:18.451788 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:18.451846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:18.475994 1302865 cri.go:89] found id: ""
	I1213 14:59:18.476008 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.476015 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:18.476020 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:18.476080 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:18.500350 1302865 cri.go:89] found id: ""
	I1213 14:59:18.500363 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.500371 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:18.500376 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:18.500436 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:18.524126 1302865 cri.go:89] found id: ""
	I1213 14:59:18.524178 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.524186 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:18.524191 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:18.524251 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:18.552637 1302865 cri.go:89] found id: ""
	I1213 14:59:18.552650 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.552657 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:18.552668 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:18.552735 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:18.576409 1302865 cri.go:89] found id: ""
	I1213 14:59:18.576423 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.576430 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:18.576437 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:18.576448 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:18.632727 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:18.632750 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:18.649857 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:18.649874 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:18.717909 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:18.709647   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.710444   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712120   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712687   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.714255   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:18.709647   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.710444   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712120   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712687   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.714255   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:18.717920 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:18.717930 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:18.779709 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:18.779731 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:21.307289 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:21.317675 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:21.317738 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:21.357856 1302865 cri.go:89] found id: ""
	I1213 14:59:21.357870 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.357886 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:21.357892 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:21.357952 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:21.383442 1302865 cri.go:89] found id: ""
	I1213 14:59:21.383456 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.383478 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:21.383483 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:21.383550 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:21.410523 1302865 cri.go:89] found id: ""
	I1213 14:59:21.410537 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.410544 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:21.410549 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:21.410606 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:21.437275 1302865 cri.go:89] found id: ""
	I1213 14:59:21.437289 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.437296 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:21.437303 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:21.437361 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:21.460786 1302865 cri.go:89] found id: ""
	I1213 14:59:21.460800 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.460807 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:21.460813 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:21.460871 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:21.484394 1302865 cri.go:89] found id: ""
	I1213 14:59:21.484409 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.484416 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:21.484422 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:21.484481 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:21.513384 1302865 cri.go:89] found id: ""
	I1213 14:59:21.513398 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.513405 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:21.513413 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:21.513423 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:21.568892 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:21.568912 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:21.586837 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:21.586854 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:21.662678 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:21.654029   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.654719   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.656490   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.657142   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.658705   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:21.654029   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.654719   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.656490   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.657142   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.658705   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:21.662688 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:21.662699 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:21.736289 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:21.736318 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:24.267273 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:24.277337 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:24.277401 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:24.300799 1302865 cri.go:89] found id: ""
	I1213 14:59:24.300813 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.300820 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:24.300825 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:24.300883 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:24.329119 1302865 cri.go:89] found id: ""
	I1213 14:59:24.329133 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.329140 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:24.329145 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:24.329207 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:24.359906 1302865 cri.go:89] found id: ""
	I1213 14:59:24.359920 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.359927 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:24.359934 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:24.359993 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:24.388174 1302865 cri.go:89] found id: ""
	I1213 14:59:24.388188 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.388195 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:24.388201 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:24.388265 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:24.416221 1302865 cri.go:89] found id: ""
	I1213 14:59:24.416235 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.416242 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:24.416247 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:24.416306 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:24.441358 1302865 cri.go:89] found id: ""
	I1213 14:59:24.441373 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.441380 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:24.441385 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:24.441444 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:24.465868 1302865 cri.go:89] found id: ""
	I1213 14:59:24.465882 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.465889 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:24.465897 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:24.465907 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:24.522170 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:24.522189 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:24.539720 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:24.539741 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:24.605986 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:24.597621   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.598252   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.599831   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.600201   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.601630   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:24.597621   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.598252   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.599831   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.600201   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.601630   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:24.605996 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:24.606007 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:24.667358 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:24.667377 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:27.195225 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:27.205377 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:27.205438 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:27.229665 1302865 cri.go:89] found id: ""
	I1213 14:59:27.229679 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.229686 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:27.229692 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:27.229755 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:27.253927 1302865 cri.go:89] found id: ""
	I1213 14:59:27.253943 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.253950 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:27.253961 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:27.254022 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:27.277865 1302865 cri.go:89] found id: ""
	I1213 14:59:27.277879 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.277886 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:27.277891 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:27.277949 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:27.305956 1302865 cri.go:89] found id: ""
	I1213 14:59:27.305969 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.305977 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:27.305982 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:27.306041 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:27.330227 1302865 cri.go:89] found id: ""
	I1213 14:59:27.330241 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.330248 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:27.330253 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:27.330312 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:27.367738 1302865 cri.go:89] found id: ""
	I1213 14:59:27.367752 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.367759 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:27.367764 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:27.367823 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:27.400224 1302865 cri.go:89] found id: ""
	I1213 14:59:27.400239 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.400254 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:27.400262 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:27.400272 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:27.428506 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:27.428525 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:27.484755 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:27.484775 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:27.501783 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:27.501800 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:27.568006 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:27.559400   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.559958   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.561857   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.562433   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.564051   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:27.559400   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.559958   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.561857   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.562433   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.564051   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:27.568017 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:27.568029 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:30.130924 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:30.142124 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:30.142187 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:30.168272 1302865 cri.go:89] found id: ""
	I1213 14:59:30.168286 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.168301 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:30.168306 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:30.168379 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:30.198491 1302865 cri.go:89] found id: ""
	I1213 14:59:30.198507 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.198515 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:30.198520 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:30.198583 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:30.224307 1302865 cri.go:89] found id: ""
	I1213 14:59:30.224321 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.224329 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:30.224334 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:30.224398 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:30.252127 1302865 cri.go:89] found id: ""
	I1213 14:59:30.252142 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.252150 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:30.252155 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:30.252216 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:30.277686 1302865 cri.go:89] found id: ""
	I1213 14:59:30.277700 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.277707 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:30.277712 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:30.277773 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:30.302751 1302865 cri.go:89] found id: ""
	I1213 14:59:30.302766 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.302773 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:30.302779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:30.302864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:30.331699 1302865 cri.go:89] found id: ""
	I1213 14:59:30.331713 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.331720 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:30.331727 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:30.331741 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:30.384091 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:30.384107 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:30.448178 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:30.448197 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:30.465395 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:30.465414 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:30.525911 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:30.518056   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.518498   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.519701   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.520227   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.521907   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:30.518056   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.518498   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.519701   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.520227   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.521907   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:30.525921 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:30.525931 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:33.088366 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:33.098677 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:33.098747 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:33.123559 1302865 cri.go:89] found id: ""
	I1213 14:59:33.123574 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.123581 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:33.123587 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:33.123648 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:33.149199 1302865 cri.go:89] found id: ""
	I1213 14:59:33.149214 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.149221 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:33.149231 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:33.149294 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:33.174660 1302865 cri.go:89] found id: ""
	I1213 14:59:33.174674 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.174681 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:33.174686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:33.174747 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:33.199686 1302865 cri.go:89] found id: ""
	I1213 14:59:33.199701 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.199709 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:33.199714 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:33.199776 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:33.223975 1302865 cri.go:89] found id: ""
	I1213 14:59:33.223990 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.223997 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:33.224002 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:33.224062 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:33.248004 1302865 cri.go:89] found id: ""
	I1213 14:59:33.248019 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.248026 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:33.248032 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:33.248099 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:33.272806 1302865 cri.go:89] found id: ""
	I1213 14:59:33.272821 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.272829 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:33.272837 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:33.272847 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:33.300705 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:33.300722 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:33.363767 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:33.363786 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:33.382421 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:33.382440 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:33.450503 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:33.442115   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.442703   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444236   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444803   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.446325   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:33.442115   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.442703   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444236   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444803   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.446325   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:33.450514 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:33.450526 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:36.015724 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:36.026901 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:36.026965 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:36.053629 1302865 cri.go:89] found id: ""
	I1213 14:59:36.053645 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.053653 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:36.053658 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:36.053722 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:36.080154 1302865 cri.go:89] found id: ""
	I1213 14:59:36.080170 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.080177 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:36.080183 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:36.080247 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:36.105197 1302865 cri.go:89] found id: ""
	I1213 14:59:36.105212 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.105219 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:36.105224 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:36.105284 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:36.129426 1302865 cri.go:89] found id: ""
	I1213 14:59:36.129440 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.129453 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:36.129458 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:36.129516 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:36.157680 1302865 cri.go:89] found id: ""
	I1213 14:59:36.157695 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.157702 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:36.157707 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:36.157768 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:36.186306 1302865 cri.go:89] found id: ""
	I1213 14:59:36.186320 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.186327 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:36.186333 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:36.186404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:36.210490 1302865 cri.go:89] found id: ""
	I1213 14:59:36.210504 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.210511 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:36.210518 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:36.210528 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:36.265225 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:36.265244 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:36.282625 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:36.282641 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:36.356056 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:36.344168   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.345304   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.346780   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.347080   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.348284   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:36.344168   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.345304   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.346780   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.347080   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.348284   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:36.356066 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:36.356078 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:36.426572 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:36.426595 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:38.953386 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:38.964071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:38.964149 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:38.987398 1302865 cri.go:89] found id: ""
	I1213 14:59:38.987412 1302865 logs.go:282] 0 containers: []
	W1213 14:59:38.987420 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:38.987426 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:38.987501 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:39.014333 1302865 cri.go:89] found id: ""
	I1213 14:59:39.014348 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.014355 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:39.014360 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:39.014425 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:39.041685 1302865 cri.go:89] found id: ""
	I1213 14:59:39.041699 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.041706 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:39.041711 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:39.041773 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:39.065151 1302865 cri.go:89] found id: ""
	I1213 14:59:39.065165 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.065172 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:39.065177 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:39.065236 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:39.089601 1302865 cri.go:89] found id: ""
	I1213 14:59:39.089614 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.089621 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:39.089629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:39.089695 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:39.114392 1302865 cri.go:89] found id: ""
	I1213 14:59:39.114406 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.114413 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:39.114418 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:39.114479 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:39.139175 1302865 cri.go:89] found id: ""
	I1213 14:59:39.139188 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.139195 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:39.139204 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:39.139214 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:39.194900 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:39.194920 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:39.212516 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:39.212534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:39.278353 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:39.270327   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.270899   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272513   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272875   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.274310   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:39.270327   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.270899   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272513   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272875   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.274310   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:39.278363 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:39.278376 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:39.339218 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:39.339237 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:41.878578 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:41.888870 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:41.888930 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:41.916325 1302865 cri.go:89] found id: ""
	I1213 14:59:41.916339 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.916346 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:41.916352 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:41.916408 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:41.940631 1302865 cri.go:89] found id: ""
	I1213 14:59:41.940646 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.940653 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:41.940658 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:41.940721 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:41.964819 1302865 cri.go:89] found id: ""
	I1213 14:59:41.964835 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.964842 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:41.964847 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:41.964909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:41.992880 1302865 cri.go:89] found id: ""
	I1213 14:59:41.992895 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.992902 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:41.992907 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:41.992966 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:42.037181 1302865 cri.go:89] found id: ""
	I1213 14:59:42.037196 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.037203 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:42.037208 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:42.037272 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:42.066224 1302865 cri.go:89] found id: ""
	I1213 14:59:42.066240 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.066247 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:42.066253 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:42.066324 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:42.113241 1302865 cri.go:89] found id: ""
	I1213 14:59:42.113259 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.113267 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:42.113275 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:42.113288 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:42.174660 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:42.174686 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:42.197359 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:42.197391 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:42.287788 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:42.278004   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.278734   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.280708   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.281502   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.282312   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:42.278004   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.278734   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.280708   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.281502   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.282312   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:42.287799 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:42.287810 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:42.353033 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:42.353052 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:44.892059 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:44.902815 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:44.902875 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:44.927725 1302865 cri.go:89] found id: ""
	I1213 14:59:44.927740 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.927747 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:44.927752 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:44.927815 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:44.957287 1302865 cri.go:89] found id: ""
	I1213 14:59:44.957301 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.957308 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:44.957313 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:44.957371 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:44.982138 1302865 cri.go:89] found id: ""
	I1213 14:59:44.982153 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.982160 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:44.982166 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:44.982225 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:45.025671 1302865 cri.go:89] found id: ""
	I1213 14:59:45.025689 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.025697 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:45.025704 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:45.025777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:45.070096 1302865 cri.go:89] found id: ""
	I1213 14:59:45.070112 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.070121 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:45.070126 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:45.070203 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:45.113264 1302865 cri.go:89] found id: ""
	I1213 14:59:45.113281 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.113289 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:45.113302 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:45.113391 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:45.146027 1302865 cri.go:89] found id: ""
	I1213 14:59:45.146050 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.146058 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:45.146073 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:45.146084 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:45.242018 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:45.242086 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:45.278598 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:45.278619 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:45.377053 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:45.367099   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369078   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369934   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.371774   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.372065   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:45.367099   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369078   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369934   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.371774   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.372065   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:45.377063 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:45.377073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:45.449162 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:45.449183 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:47.980927 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:47.991934 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:47.991998 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:48.022075 1302865 cri.go:89] found id: ""
	I1213 14:59:48.022091 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.022098 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:48.022103 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:48.022169 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:48.052438 1302865 cri.go:89] found id: ""
	I1213 14:59:48.052454 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.052461 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:48.052466 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:48.052543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:48.077918 1302865 cri.go:89] found id: ""
	I1213 14:59:48.077932 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.077940 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:48.077945 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:48.078008 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:48.107677 1302865 cri.go:89] found id: ""
	I1213 14:59:48.107691 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.107698 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:48.107703 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:48.107803 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:48.134492 1302865 cri.go:89] found id: ""
	I1213 14:59:48.134506 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.134514 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:48.134523 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:48.134616 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:48.159260 1302865 cri.go:89] found id: ""
	I1213 14:59:48.159274 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.159281 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:48.159286 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:48.159368 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:48.184905 1302865 cri.go:89] found id: ""
	I1213 14:59:48.184920 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.184927 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:48.184935 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:48.184945 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:48.240512 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:48.240535 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:48.257663 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:48.257683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:48.323284 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:48.314914   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.315437   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317182   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317842   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.319544   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:48.314914   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.315437   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317182   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317842   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.319544   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:48.323295 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:48.323306 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:48.393384 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:48.393403 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:50.925922 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:50.936831 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:50.936895 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:50.963232 1302865 cri.go:89] found id: ""
	I1213 14:59:50.963246 1302865 logs.go:282] 0 containers: []
	W1213 14:59:50.963253 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:50.963258 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:50.963354 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:50.993552 1302865 cri.go:89] found id: ""
	I1213 14:59:50.993566 1302865 logs.go:282] 0 containers: []
	W1213 14:59:50.993572 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:50.993578 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:50.993639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:51.021945 1302865 cri.go:89] found id: ""
	I1213 14:59:51.021978 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.021986 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:51.021991 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:51.022051 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:51.049002 1302865 cri.go:89] found id: ""
	I1213 14:59:51.049017 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.049024 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:51.049029 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:51.049113 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:51.075979 1302865 cri.go:89] found id: ""
	I1213 14:59:51.075995 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.076003 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:51.076008 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:51.076071 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:51.101633 1302865 cri.go:89] found id: ""
	I1213 14:59:51.101648 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.101656 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:51.101661 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:51.101724 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:51.128983 1302865 cri.go:89] found id: ""
	I1213 14:59:51.128999 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.129007 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:51.129015 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:51.129025 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:51.185511 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:51.185538 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:51.203284 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:51.203306 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:51.265859 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:51.257020   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.257804   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.259431   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.260025   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.261672   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:51.257020   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.257804   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.259431   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.260025   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.261672   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:51.265869 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:51.265880 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:51.328096 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:51.328116 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:53.857136 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:53.867344 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:53.867405 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:53.890843 1302865 cri.go:89] found id: ""
	I1213 14:59:53.890857 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.890864 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:53.890869 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:53.890927 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:53.915236 1302865 cri.go:89] found id: ""
	I1213 14:59:53.915250 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.915258 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:53.915263 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:53.915341 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:53.939500 1302865 cri.go:89] found id: ""
	I1213 14:59:53.939515 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.939523 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:53.939528 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:53.939588 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:53.968671 1302865 cri.go:89] found id: ""
	I1213 14:59:53.968686 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.968693 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:53.968698 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:53.968766 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:53.992869 1302865 cri.go:89] found id: ""
	I1213 14:59:53.992883 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.992895 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:53.992900 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:53.992962 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:54.020494 1302865 cri.go:89] found id: ""
	I1213 14:59:54.020510 1302865 logs.go:282] 0 containers: []
	W1213 14:59:54.020518 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:54.020524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:54.020587 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:54.047224 1302865 cri.go:89] found id: ""
	I1213 14:59:54.047240 1302865 logs.go:282] 0 containers: []
	W1213 14:59:54.047247 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:54.047256 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:54.047268 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:54.064625 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:54.064643 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:54.131051 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:54.122613   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.123186   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.124913   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.125456   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.126931   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:54.122613   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.123186   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.124913   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.125456   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.126931   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:54.131061 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:54.131072 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:54.198481 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:54.198502 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:54.229657 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:54.229673 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:56.788389 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:56.798893 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:56.798978 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:56.825463 1302865 cri.go:89] found id: ""
	I1213 14:59:56.825479 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.825486 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:56.825491 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:56.825569 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:56.850902 1302865 cri.go:89] found id: ""
	I1213 14:59:56.850916 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.850923 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:56.850928 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:56.850997 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:56.875729 1302865 cri.go:89] found id: ""
	I1213 14:59:56.875743 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.875750 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:56.875755 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:56.875812 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:56.904598 1302865 cri.go:89] found id: ""
	I1213 14:59:56.904612 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.904619 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:56.904624 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:56.904684 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:56.929612 1302865 cri.go:89] found id: ""
	I1213 14:59:56.929626 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.929633 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:56.929639 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:56.929696 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:56.954323 1302865 cri.go:89] found id: ""
	I1213 14:59:56.954337 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.954345 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:56.954350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:56.954411 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:56.978916 1302865 cri.go:89] found id: ""
	I1213 14:59:56.978930 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.978937 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:56.978944 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:56.978955 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:56.996271 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:56.996290 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:57.067201 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:57.058255   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.058923   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.060482   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.061159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.062082   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:57.058255   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.058923   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.060482   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.061159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.062082   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:57.067214 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:57.067227 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:57.129467 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:57.129486 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:57.160756 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:57.160773 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:59.726541 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:59.737128 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:59.737192 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:59.762034 1302865 cri.go:89] found id: ""
	I1213 14:59:59.762050 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.762057 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:59.762063 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:59.762136 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:59.786710 1302865 cri.go:89] found id: ""
	I1213 14:59:59.786724 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.786731 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:59.786738 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:59.786799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:59.823635 1302865 cri.go:89] found id: ""
	I1213 14:59:59.823649 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.823656 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:59.823661 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:59.823721 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:59.853555 1302865 cri.go:89] found id: ""
	I1213 14:59:59.853568 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.853576 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:59.853580 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:59.853639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:59.878766 1302865 cri.go:89] found id: ""
	I1213 14:59:59.878781 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.878788 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:59.878793 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:59.878853 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:59.904985 1302865 cri.go:89] found id: ""
	I1213 14:59:59.904999 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.905006 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:59.905012 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:59.905084 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:59.929868 1302865 cri.go:89] found id: ""
	I1213 14:59:59.929882 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.929889 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:59.929896 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:59.929906 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:59.991222 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:59.991242 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:00:00.071719 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 15:00:00.071740 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:00:00.209914 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 15:00:00.209948 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:00:00.266871 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:00:00.266916 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:00:00.606023 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:00:00.575459   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.581155   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.582626   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.584864   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.585965   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 15:00:00.575459   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.581155   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.582626   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.584864   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.585965   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
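	[editor's note] The repeated cycles above are minikube polling the node while the apiserver stays down: each pass runs pgrep for kube-apiserver, lists CRI containers for every control-plane component, and re-gathers kubelet, dmesg and containerd logs. A minimal sketch of the equivalent manual checks, using only commands already shown in this log (run on the minikube node; nothing here is specific to this profile):
		# Is an apiserver process running at all?
		sudo pgrep -xnf 'kube-apiserver.*minikube.*'
		# Does containerd know about any kube-apiserver container (running or exited)?
		sudo crictl ps -a --quiet --name=kube-apiserver
		# Why is the kubelet not starting the static pods?
		sudo journalctl -u kubelet -n 400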
	I1213 15:00:03.107691 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:00:03.118897 1302865 kubeadm.go:602] duration metric: took 4m4.796487812s to restartPrimaryControlPlane
	W1213 15:00:03.118966 1302865 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 15:00:03.119044 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 15:00:03.535783 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:00:03.550485 1302865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 15:00:03.558915 1302865 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:00:03.558988 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:00:03.567415 1302865 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:00:03.567426 1302865 kubeadm.go:158] found existing configuration files:
	
	I1213 15:00:03.567481 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 15:00:03.576037 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:00:03.576097 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:00:03.584074 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 15:00:03.592593 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:00:03.592651 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:00:03.601062 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 15:00:03.609623 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:00:03.609683 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:00:03.617551 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 15:00:03.625819 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:00:03.625879 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 15:00:03.634092 1302865 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:00:03.677773 1302865 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 15:00:03.677823 1302865 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:00:03.751455 1302865 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:00:03.751520 1302865 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:00:03.751555 1302865 kubeadm.go:319] OS: Linux
	I1213 15:00:03.751599 1302865 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:00:03.751646 1302865 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:00:03.751692 1302865 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:00:03.751738 1302865 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:00:03.751785 1302865 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:00:03.751832 1302865 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:00:03.751877 1302865 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:00:03.751923 1302865 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:00:03.751968 1302865 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:00:03.818698 1302865 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:00:03.818804 1302865 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:00:03.818894 1302865 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:00:03.825177 1302865 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:00:03.828382 1302865 out.go:252]   - Generating certificates and keys ...
	I1213 15:00:03.828484 1302865 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:00:03.828568 1302865 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:00:03.828657 1302865 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 15:00:03.828722 1302865 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 15:00:03.828813 1302865 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 15:00:03.828870 1302865 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 15:00:03.828941 1302865 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 15:00:03.829005 1302865 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 15:00:03.829084 1302865 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 15:00:03.829160 1302865 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 15:00:03.829199 1302865 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 15:00:03.829258 1302865 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:00:04.177571 1302865 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:00:04.342429 1302865 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:00:04.668058 1302865 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:00:04.760444 1302865 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:00:05.013305 1302865 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:00:05.014367 1302865 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:00:05.019071 1302865 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:00:05.022340 1302865 out.go:252]   - Booting up control plane ...
	I1213 15:00:05.022442 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:00:05.022520 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:00:05.022586 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:00:05.042894 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:00:05.043146 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:00:05.050754 1302865 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:00:05.051023 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:00:05.051065 1302865 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:00:05.191860 1302865 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:00:05.191979 1302865 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:04:05.190333 1302865 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000252344s
	I1213 15:04:05.190362 1302865 kubeadm.go:319] 
	I1213 15:04:05.190420 1302865 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:04:05.190453 1302865 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:04:05.190557 1302865 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:04:05.190562 1302865 kubeadm.go:319] 
	I1213 15:04:05.190665 1302865 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:04:05.190696 1302865 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:04:05.190726 1302865 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:04:05.190729 1302865 kubeadm.go:319] 
	I1213 15:04:05.195506 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:04:05.195924 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:04:05.196033 1302865 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:04:05.196267 1302865 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 15:04:05.196271 1302865 kubeadm.go:319] 
	I1213 15:04:05.196339 1302865 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 15:04:05.196471 1302865 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000252344s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 15:04:05.196557 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 15:04:05.613572 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:04:05.627532 1302865 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:04:05.627586 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:04:05.635470 1302865 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:04:05.635487 1302865 kubeadm.go:158] found existing configuration files:
	
	I1213 15:04:05.635549 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 15:04:05.643770 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:04:05.643832 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:04:05.651305 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 15:04:05.659066 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:04:05.659119 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:04:05.666497 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 15:04:05.674867 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:04:05.674922 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:04:05.682604 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 15:04:05.690488 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:04:05.690547 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 15:04:05.697863 1302865 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:04:05.737903 1302865 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 15:04:05.738332 1302865 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:04:05.824821 1302865 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:04:05.824881 1302865 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:04:05.824914 1302865 kubeadm.go:319] OS: Linux
	I1213 15:04:05.824955 1302865 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:04:05.825000 1302865 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:04:05.825043 1302865 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:04:05.825103 1302865 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:04:05.825147 1302865 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:04:05.825200 1302865 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:04:05.825250 1302865 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:04:05.825294 1302865 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:04:05.825336 1302865 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:04:05.892296 1302865 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:04:05.892418 1302865 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:04:05.892526 1302865 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:04:05.898143 1302865 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:04:05.903540 1302865 out.go:252]   - Generating certificates and keys ...
	I1213 15:04:05.903629 1302865 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:04:05.903698 1302865 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:04:05.903775 1302865 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 15:04:05.903837 1302865 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 15:04:05.903908 1302865 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 15:04:05.903958 1302865 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 15:04:05.904021 1302865 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 15:04:05.904084 1302865 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 15:04:05.904160 1302865 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 15:04:05.904234 1302865 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 15:04:05.904275 1302865 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 15:04:05.904330 1302865 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:04:05.992570 1302865 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:04:06.166280 1302865 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:04:06.244452 1302865 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:04:06.386969 1302865 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:04:06.630629 1302865 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:04:06.631865 1302865 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:04:06.635872 1302865 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:04:06.639278 1302865 out.go:252]   - Booting up control plane ...
	I1213 15:04:06.639389 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:04:06.639462 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:04:06.639523 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:04:06.659049 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:04:06.659158 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:04:06.666661 1302865 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:04:06.666977 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:04:06.667151 1302865 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:04:06.810085 1302865 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:04:06.810198 1302865 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:08:06.809904 1302865 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000225024s
	I1213 15:08:06.809924 1302865 kubeadm.go:319] 
	I1213 15:08:06.810412 1302865 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:08:06.810499 1302865 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:08:06.810921 1302865 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:08:06.810931 1302865 kubeadm.go:319] 
	I1213 15:08:06.811146 1302865 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:08:06.811211 1302865 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:08:06.811291 1302865 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:08:06.811302 1302865 kubeadm.go:319] 
	I1213 15:08:06.814720 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:08:06.816724 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:08:06.816881 1302865 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:08:06.817212 1302865 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 15:08:06.817216 1302865 kubeadm.go:319] 
	I1213 15:08:06.817309 1302865 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 15:08:06.817355 1302865 kubeadm.go:403] duration metric: took 12m8.532180676s to StartCluster
	I1213 15:08:06.817385 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:08:06.817448 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:08:06.841821 1302865 cri.go:89] found id: ""
	I1213 15:08:06.841835 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.841841 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 15:08:06.841847 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:08:06.841909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:08:06.865102 1302865 cri.go:89] found id: ""
	I1213 15:08:06.865122 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.865129 1302865 logs.go:284] No container was found matching "etcd"
	I1213 15:08:06.865134 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:08:06.865194 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:08:06.889354 1302865 cri.go:89] found id: ""
	I1213 15:08:06.889369 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.889376 1302865 logs.go:284] No container was found matching "coredns"
	I1213 15:08:06.889381 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:08:06.889444 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:08:06.916987 1302865 cri.go:89] found id: ""
	I1213 15:08:06.917001 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.917008 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 15:08:06.917014 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:08:06.917074 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:08:06.941966 1302865 cri.go:89] found id: ""
	I1213 15:08:06.941980 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.941987 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:08:06.941992 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:08:06.942053 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:08:06.967555 1302865 cri.go:89] found id: ""
	I1213 15:08:06.967570 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.967576 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 15:08:06.967582 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:08:06.967642 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:08:06.990643 1302865 cri.go:89] found id: ""
	I1213 15:08:06.990661 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.990669 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 15:08:06.990677 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 15:08:06.990688 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:08:07.046948 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 15:08:07.046967 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:08:07.064271 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:08:07.064292 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:08:07.156681 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:08:07.142614   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.149501   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.150219   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.151350   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.152858   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 15:08:07.142614   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.149501   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.150219   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.151350   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.152858   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:08:07.156693 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 15:08:07.156703 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:08:07.225180 1302865 logs.go:123] Gathering logs for container status ...
	I1213 15:08:07.225205 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 15:08:07.257292 1302865 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 15:08:07.257342 1302865 out.go:285] * 
	W1213 15:08:07.257449 1302865 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:08:07.257519 1302865 out.go:285] * 
	W1213 15:08:07.259853 1302865 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 15:08:07.265906 1302865 out.go:203] 
	W1213 15:08:07.268865 1302865 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:08:07.268911 1302865 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 15:08:07.268933 1302865 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 15:08:07.272012 1302865 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371055694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371071185Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371111471Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371124460Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371134322Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371145407Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371154235Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371164894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371186333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371215091Z" level=info msg="Connect containerd service"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371566107Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.372148338Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.392820866Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.392994105Z" level=info msg="Start subscribing containerd event"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.393210215Z" level=info msg="Start recovering state"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.393152477Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.438865616Z" level=info msg="Start event monitor"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439053460Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439140720Z" level=info msg="Start streaming server"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439202880Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439258526Z" level=info msg="runtime interface starting up..."
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439350397Z" level=info msg="starting plugins..."
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439418867Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 14:55:56 functional-562018 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.441778888Z" level=info msg="containerd successfully booted in 0.092313s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:08:10.779950   21206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:10.780683   21206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:10.782279   21206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:10.782863   21206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:10.784403   21206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 15:08:10 up  6:50,  0 user,  load average: 0.01, 0.13, 0.43
	Linux functional-562018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 15:08:07 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:07 functional-562018 kubelet[20988]: E1213 15:08:07.907124   20988 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:08:07 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:08:07 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:08:08 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 15:08:08 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:08 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:08 functional-562018 kubelet[21082]: E1213 15:08:08.674862   21082 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:08:08 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:08:08 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:08:09 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 15:08:09 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:09 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:09 functional-562018 kubelet[21095]: E1213 15:08:09.393357   21095 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:08:09 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:08:09 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:08:10 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 13 15:08:10 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:10 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:10 functional-562018 kubelet[21124]: E1213 15:08:10.167047   21124 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:08:10 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:08:10 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:08:10 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 13 15:08:10 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:10 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 2 (361.307857ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.23s)
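The kubelet journal captured above points at the root cause for this run: kubelet v1.35.0-beta.0 exits with "kubelet is configured to not run on a host using cgroup v1", and the kubeadm preflight warning states that cgroup v1 hosts must set the KubeletConfiguration option 'FailCgroupV1' to 'false' (and explicitly skip that validation). A minimal sketch of such an override, assuming the lower-camel-case YAML form of the field named in the warning and a hypothetical patch path under /etc/kubernetes/patches (neither path nor file name is taken from this run):

	# hypothetical kubeadm patch file; directory and name are assumptions, not from this run
	$ sudo tee /etc/kubernetes/patches/kubeletconfiguration+strategic.yaml <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

The "[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"" lines in the log suggest minikube already routes a kubeadm patch at that target, so a fix of this shape would most likely land in minikube itself rather than in the test.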

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-562018 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-562018 apply -f testdata/invalidsvc.yaml: exit status 1 (53.380063ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-562018 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.73s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-562018 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-562018 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-562018 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-562018 --alsologtostderr -v=1] stderr:
I1213 15:10:15.140884 1321764 out.go:360] Setting OutFile to fd 1 ...
I1213 15:10:15.141024 1321764 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:15.141032 1321764 out.go:374] Setting ErrFile to fd 2...
I1213 15:10:15.141038 1321764 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:15.141295 1321764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 15:10:15.141561 1321764 mustload.go:66] Loading cluster: functional-562018
I1213 15:10:15.141982 1321764 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 15:10:15.142488 1321764 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
I1213 15:10:15.160028 1321764 host.go:66] Checking if "functional-562018" exists ...
I1213 15:10:15.160386 1321764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 15:10:15.218656 1321764 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 15:10:15.208626223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 15:10:15.218781 1321764 api_server.go:166] Checking apiserver status ...
I1213 15:10:15.218848 1321764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 15:10:15.218896 1321764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 15:10:15.236359 1321764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
W1213 15:10:15.346559 1321764 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1213 15:10:15.349657 1321764 out.go:179] * The control-plane node functional-562018 apiserver is not running: (state=Stopped)
I1213 15:10:15.352525 1321764 out.go:179]   To start a cluster, run: "minikube start -p functional-562018"
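The abort above is minikube's apiserver liveness probe: the `sudo pgrep -xnf kube-apiserver.*minikube.*` command run over SSH exits with status 1, which pgrep uses to mean "no matching process", so the dashboard command gives up. A minimal local sketch of that decision (a hedged illustration run on the host directly, not minikube's ssh_runner code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// apiserverRunning mirrors the probe seen in the log: pgrep exit status 0
// means a kube-apiserver process matched, exit status 1 means none did.
func apiserverRunning() (bool, error) {
	cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	err := cmd.Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return false, nil // no matching process: apiserver is not running
	}
	return false, err // anything else (pgrep missing, sudo failure) is a real error
}

func main() {
	up, err := apiserverRunning()
	fmt.Printf("apiserver running: %v (err: %v)\n", up, err)
}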
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-562018
helpers_test.go:244: (dbg) docker inspect functional-562018:

-- stdout --
	[
	    {
	        "Id": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	        "Created": "2025-12-13T14:41:15.451086653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1291703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T14:41:15.527927053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hosts",
	        "LogPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648-json.log",
	        "Name": "/functional-562018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-562018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-562018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	                "LowerDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-562018",
	                "Source": "/var/lib/docker/volumes/functional-562018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-562018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-562018",
	                "name.minikube.sigs.k8s.io": "functional-562018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b22297a29553cdd0dbc4eaa766abcdb1e67465ee18f1e2f5f9917dc8cf6d08",
	            "SandboxKey": "/var/run/docker/netns/f4b22297a295",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-562018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f3:95:ff:30:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd2e7aed753870546b4bccdeed8073ad5795c14334e906d06b964f72dc448c38",
	                    "EndpointID": "a0e947c1e40f773105c811b67b7d1d63f19d3a20060380bbde944bf9bfe39be5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-562018",
	                        "2cd1277ca783"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018: exit status 2 (327.119425ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons    │ functional-562018 addons list -o json                                                                                                               │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ ssh       │ functional-562018 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ mount     │ -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2700642734/001:/mount-9p --alsologtostderr -v=1              │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ ssh       │ functional-562018 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ ssh       │ functional-562018 ssh -- ls -la /mount-9p                                                                                                           │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ ssh       │ functional-562018 ssh cat /mount-9p/test-1765638608439849039                                                                                        │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ ssh       │ functional-562018 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ ssh       │ functional-562018 ssh sudo umount -f /mount-9p                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ mount     │ -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3096654653/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ ssh       │ functional-562018 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ ssh       │ functional-562018 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ ssh       │ functional-562018 ssh -- ls -la /mount-9p                                                                                                           │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ ssh       │ functional-562018 ssh sudo umount -f /mount-9p                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ mount     │ -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1452398466/001:/mount2 --alsologtostderr -v=1                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ mount     │ -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1452398466/001:/mount1 --alsologtostderr -v=1                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ mount     │ -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1452398466/001:/mount3 --alsologtostderr -v=1                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ ssh       │ functional-562018 ssh findmnt -T /mount1                                                                                                            │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ ssh       │ functional-562018 ssh findmnt -T /mount1                                                                                                            │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ ssh       │ functional-562018 ssh findmnt -T /mount2                                                                                                            │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ ssh       │ functional-562018 ssh findmnt -T /mount3                                                                                                            │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ mount     │ -p functional-562018 --kill=true                                                                                                                    │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ start     │ -p functional-562018 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ start     │ -p functional-562018 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ start     │ -p functional-562018 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-562018 --alsologtostderr -v=1                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 15:10:14
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 15:10:14.924491 1321719 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:10:14.924614 1321719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:10:14.924624 1321719 out.go:374] Setting ErrFile to fd 2...
	I1213 15:10:14.924629 1321719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:10:14.925025 1321719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:10:14.925420 1321719 out.go:368] Setting JSON to false
	I1213 15:10:14.926253 1321719 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24764,"bootTime":1765613851,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 15:10:14.926325 1321719 start.go:143] virtualization:  
	I1213 15:10:14.929616 1321719 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 15:10:14.933349 1321719 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 15:10:14.933442 1321719 notify.go:221] Checking for updates...
	I1213 15:10:14.939184 1321719 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 15:10:14.942058 1321719 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 15:10:14.944885 1321719 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 15:10:14.947818 1321719 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 15:10:14.950726 1321719 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 15:10:14.954103 1321719 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 15:10:14.954711 1321719 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 15:10:14.977567 1321719 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 15:10:14.977713 1321719 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 15:10:15.066292 1321719 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 15:10:15.055562981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 15:10:15.066417 1321719 docker.go:319] overlay module found
	I1213 15:10:15.069497 1321719 out.go:179] * Using the docker driver based on the existing profile
	I1213 15:10:15.072536 1321719 start.go:309] selected driver: docker
	I1213 15:10:15.072573 1321719 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 15:10:15.072699 1321719 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 15:10:15.076744 1321719 out.go:203] 
	W1213 15:10:15.079852 1321719 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 15:10:15.082795 1321719 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 15:08:16 functional-562018 containerd[9685]: time="2025-12-13T15:08:16.614640453Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.594699770Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\""
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.603547510Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.603653813Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.607908789Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\" returns successfully"
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.989472917Z" level=info msg="No images store for sha256:3fb21f6d7fe9fd863c3548cb9498b8e552e958f0a50edc71e300f38a249a8021"
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.991836514Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.999814739Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:18 functional-562018 containerd[9685]: time="2025-12-13T15:08:18.000343226Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.424371600Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.427299481Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.429590825Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.438723433Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\" returns successfully"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.738866011Z" level=info msg="No images store for sha256:3fb21f6d7fe9fd863c3548cb9498b8e552e958f0a50edc71e300f38a249a8021"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.741155321Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.748278873Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.748608153Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.747498767Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\""
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.750124437Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.752467907Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.765182475Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\" returns successfully"
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.628092462Z" level=info msg="No images store for sha256:bffe89cb060c176804db60dc616d4e1117e4c9cbe423e0274bf52a76645edb04"
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.630292191Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.637226743Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.637535149Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:10:16.411874   23831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:10:16.412260   23831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:10:16.413912   23831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:10:16.414460   23831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:10:16.416067   23831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 15:10:16 up  6:52,  0 user,  load average: 0.54, 0.41, 0.51
	Linux functional-562018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 15:10:13 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:10:13 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 489.
	Dec 13 15:10:13 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:13 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:13 functional-562018 kubelet[23693]: E1213 15:10:13.894585   23693 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:10:13 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:10:13 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:10:14 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 490.
	Dec 13 15:10:14 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:14 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:14 functional-562018 kubelet[23714]: E1213 15:10:14.654320   23714 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:10:14 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:10:14 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:10:15 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 491.
	Dec 13 15:10:15 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:15 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:15 functional-562018 kubelet[23727]: E1213 15:10:15.385609   23727 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:10:15 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:10:15 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:10:16 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 492.
	Dec 13 15:10:16 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:16 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:16 functional-562018 kubelet[23759]: E1213 15:10:16.144790   23759 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:10:16 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:10:16 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
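The kubelet section of the logs above shows the underlying failure behind this group of tests: the v1.35.0-beta.0 kubelet refuses to start because the node is running cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver never comes back up. A quick, hedged way to confirm which cgroup mode a host is in (not part of the test suite) is to look for the unified-hierarchy control file:

package main

import (
	"fmt"
	"os"
)

func main() {
	// /sys/fs/cgroup/cgroup.controllers exists only when the cgroup v2
	// unified hierarchy is mounted at /sys/fs/cgroup.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else if os.IsNotExist(err) {
		fmt.Println("cgroup v1 - matches the kubelet validation error in the logs")
	} else {
		fmt.Println("could not determine cgroup mode:", err)
	}
}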
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 2 (311.385697ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.73s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd


=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 status: exit status 2 (304.627316ms)

-- stdout --
	functional-562018
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-562018 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (310.051798ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-562018 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
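The -f argument above is an ordinary Go text/template rendered against the status fields, so the "kublet" spelling in the output comes from the test's own format string, not from minikube. A small sketch reproducing the line shown above (the anonymous struct is a stand-in for illustration, not minikube's actual status type):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in values matching the status reported above.
	st := struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Configured"}

	// The same template string the test passes via -f, including the "kublet" typo.
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}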
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 status -o json: exit status 2 (339.945726ms)

-- stdout --
	{"Name":"functional-562018","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-562018 status -o json" : exit status 2
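For scripting against the JSON form, the single line printed above decodes directly into a small struct; a minimal sketch, assuming only the fields visible in this output (field names follow the JSON keys, not necessarily minikube's internal types):

package main

import (
	"encoding/json"
	"fmt"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// The exact line printed by "minikube status -o json" above.
	raw := `{"Name":"functional-562018","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// The host container is up but the control-plane components are not,
	// which is why the command exits with status 2 here.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}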
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-562018
helpers_test.go:244: (dbg) docker inspect functional-562018:

-- stdout --
	[
	    {
	        "Id": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	        "Created": "2025-12-13T14:41:15.451086653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1291703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T14:41:15.527927053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hosts",
	        "LogPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648-json.log",
	        "Name": "/functional-562018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-562018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-562018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	                "LowerDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-562018",
	                "Source": "/var/lib/docker/volumes/functional-562018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-562018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-562018",
	                "name.minikube.sigs.k8s.io": "functional-562018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b22297a29553cdd0dbc4eaa766abcdb1e67465ee18f1e2f5f9917dc8cf6d08",
	            "SandboxKey": "/var/run/docker/netns/f4b22297a295",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-562018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f3:95:ff:30:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd2e7aed753870546b4bccdeed8073ad5795c14334e906d06b964f72dc448c38",
	                    "EndpointID": "a0e947c1e40f773105c811b67b7d1d63f19d3a20060380bbde944bf9bfe39be5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-562018",
	                        "2cd1277ca783"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018: exit status 2 (315.571681ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
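Note: the probe above prints Running for the host yet exits with status 2, and the harness records it as "(may be ok)": minikube status also encodes cluster state in its exit code, so a non-zero exit is not by itself a hard failure. A minimal sketch of the same probe under those assumptions (binary path and profile name are taken from this run; the wrapper program is illustrative):

// Hypothetical sketch: rerun the status probe from the harness and surface
// the exit code instead of treating any non-zero exit as fatal. Assumes the
// minikube binary and profile used in this run.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64",
		"status", "--format={{.Host}}", "-p", "functional-562018", "-n", "functional-562018")
	out, err := cmd.Output() // stdout is still returned even on a non-zero exit
	state := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host %s (exit 0)\n", state)
	case errors.As(err, &exitErr):
		// In the run above this path is taken: stdout says "Running" but the
		// exit code is 2, which the harness treats as possibly acceptable.
		fmt.Printf("host %s, non-zero status exit (exit %d)\n", state, exitErr.ExitCode())
	default:
		log.Fatalf("could not run minikube status: %v", err)
	}
}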
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-562018 ssh sudo cat /usr/share/ca-certificates/1252934.pem                                                                                           │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ image   │ functional-562018 image ls                                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh sudo cat /etc/ssl/certs/12529342.pem                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ image   │ functional-562018 image save kicbase/echo-server:functional-562018 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh sudo cat /usr/share/ca-certificates/12529342.pem                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ image   │ functional-562018 image rm kicbase/echo-server:functional-562018 --alsologtostderr                                                                              │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ image   │ functional-562018 image ls                                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh sudo cat /etc/test/nested/copy/1252934/hosts                                                                                              │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ image   │ functional-562018 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ service │ functional-562018 service list                                                                                                                                  │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ image   │ functional-562018 image ls                                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ service │ functional-562018 service list -o json                                                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ image   │ functional-562018 image save --daemon kicbase/echo-server:functional-562018 --alsologtostderr                                                                   │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ service │ functional-562018 service --namespace=default --https --url hello-node                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ service │ functional-562018 service hello-node --url --format={{.IP}}                                                                                                     │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ ssh     │ functional-562018 ssh echo hello                                                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ service │ functional-562018 service hello-node --url                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ ssh     │ functional-562018 ssh cat /etc/hostname                                                                                                                         │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ tunnel  │ functional-562018 tunnel --alsologtostderr                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ tunnel  │ functional-562018 tunnel --alsologtostderr                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ tunnel  │ functional-562018 tunnel --alsologtostderr                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ addons  │ functional-562018 addons list                                                                                                                                   │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ addons  │ functional-562018 addons list -o json                                                                                                                           │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:55:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:55:53.719613 1302865 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:55:53.719728 1302865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:55:53.719732 1302865 out.go:374] Setting ErrFile to fd 2...
	I1213 14:55:53.719735 1302865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:55:53.719985 1302865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:55:53.720335 1302865 out.go:368] Setting JSON to false
	I1213 14:55:53.721190 1302865 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23903,"bootTime":1765613851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:55:53.721260 1302865 start.go:143] virtualization:  
	I1213 14:55:53.724694 1302865 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:55:53.728380 1302865 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:55:53.728496 1302865 notify.go:221] Checking for updates...
	I1213 14:55:53.734124 1302865 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:55:53.736928 1302865 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:55:53.739728 1302865 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:55:53.742545 1302865 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 14:55:53.745302 1302865 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:55:53.748618 1302865 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:55:53.748719 1302865 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:55:53.782535 1302865 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:55:53.782649 1302865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:55:53.845662 1302865 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 14:55:53.829246857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:55:53.845758 1302865 docker.go:319] overlay module found
	I1213 14:55:53.849849 1302865 out.go:179] * Using the docker driver based on existing profile
	I1213 14:55:53.852762 1302865 start.go:309] selected driver: docker
	I1213 14:55:53.852774 1302865 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:53.852875 1302865 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:55:53.852984 1302865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:55:53.929886 1302865 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 14:55:53.921020705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:55:53.930294 1302865 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 14:55:53.930319 1302865 cni.go:84] Creating CNI manager for ""
	I1213 14:55:53.930367 1302865 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:55:53.930406 1302865 start.go:353] cluster config:
	{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:53.933662 1302865 out.go:179] * Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	I1213 14:55:53.936743 1302865 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 14:55:53.939760 1302865 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 14:55:53.942676 1302865 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:55:53.942716 1302865 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 14:55:53.942732 1302865 cache.go:65] Caching tarball of preloaded images
	I1213 14:55:53.942759 1302865 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 14:55:53.942845 1302865 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 14:55:53.942855 1302865 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 14:55:53.942970 1302865 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
	I1213 14:55:53.962568 1302865 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 14:55:53.962579 1302865 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 14:55:53.962597 1302865 cache.go:243] Successfully downloaded all kic artifacts
	I1213 14:55:53.962628 1302865 start.go:360] acquireMachinesLock for functional-562018: {Name:mk6a7956e4fce5d8e0f4d6fe039ab67ad6cd688b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:55:53.962689 1302865 start.go:364] duration metric: took 45.029µs to acquireMachinesLock for "functional-562018"
	I1213 14:55:53.962707 1302865 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:55:53.962711 1302865 fix.go:54] fixHost starting: 
	I1213 14:55:53.962972 1302865 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:55:53.980087 1302865 fix.go:112] recreateIfNeeded on functional-562018: state=Running err=<nil>
	W1213 14:55:53.980106 1302865 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:55:53.983261 1302865 out.go:252] * Updating the running docker "functional-562018" container ...
	I1213 14:55:53.983285 1302865 machine.go:94] provisionDockerMachine start ...
	I1213 14:55:53.983388 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.000833 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.001170 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.001177 1302865 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:55:54.155013 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:55:54.155027 1302865 ubuntu.go:182] provisioning hostname "functional-562018"
	I1213 14:55:54.155091 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.172804 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.173100 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.173108 1302865 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-562018 && echo "functional-562018" | sudo tee /etc/hostname
	I1213 14:55:54.335232 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:55:54.335302 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.353315 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.353625 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.353638 1302865 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-562018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-562018/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-562018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:55:54.503602 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:55:54.503618 1302865 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 14:55:54.503648 1302865 ubuntu.go:190] setting up certificates
	I1213 14:55:54.503664 1302865 provision.go:84] configureAuth start
	I1213 14:55:54.503732 1302865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:55:54.520737 1302865 provision.go:143] copyHostCerts
	I1213 14:55:54.520806 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 14:55:54.520813 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:55:54.520892 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 14:55:54.520992 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 14:55:54.520996 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:55:54.521022 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 14:55:54.521079 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 14:55:54.521082 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:55:54.521105 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 14:55:54.521157 1302865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.functional-562018 san=[127.0.0.1 192.168.49.2 functional-562018 localhost minikube]
	I1213 14:55:54.737947 1302865 provision.go:177] copyRemoteCerts
	I1213 14:55:54.738007 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:55:54.738047 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.756271 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:54.864730 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 14:55:54.885080 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 14:55:54.903456 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 14:55:54.921228 1302865 provision.go:87] duration metric: took 417.552003ms to configureAuth
	I1213 14:55:54.921245 1302865 ubuntu.go:206] setting minikube options for container-runtime
	I1213 14:55:54.921445 1302865 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:55:54.921451 1302865 machine.go:97] duration metric: took 938.161957ms to provisionDockerMachine
	I1213 14:55:54.921458 1302865 start.go:293] postStartSetup for "functional-562018" (driver="docker")
	I1213 14:55:54.921469 1302865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:55:54.921526 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:55:54.921569 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.939146 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.043619 1302865 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:55:55.047116 1302865 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 14:55:55.047136 1302865 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 14:55:55.047147 1302865 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 14:55:55.047201 1302865 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 14:55:55.047279 1302865 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 14:55:55.047377 1302865 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> hosts in /etc/test/nested/copy/1252934
	I1213 14:55:55.047422 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1252934
	I1213 14:55:55.055022 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:55:55.072651 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts --> /etc/test/nested/copy/1252934/hosts (40 bytes)
	I1213 14:55:55.090146 1302865 start.go:296] duration metric: took 168.672467ms for postStartSetup
	I1213 14:55:55.090222 1302865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:55:55.090277 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.110519 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.212743 1302865 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 14:55:55.217665 1302865 fix.go:56] duration metric: took 1.254946074s for fixHost
	I1213 14:55:55.217694 1302865 start.go:83] releasing machines lock for "functional-562018", held for 1.254985507s
	I1213 14:55:55.217771 1302865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:55:55.234536 1302865 ssh_runner.go:195] Run: cat /version.json
	I1213 14:55:55.234580 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.234841 1302865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:55:55.234904 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.258034 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.263005 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.363489 1302865 ssh_runner.go:195] Run: systemctl --version
	I1213 14:55:55.466608 1302865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 14:55:55.470983 1302865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:55:55.471044 1302865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:55:55.478685 1302865 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 14:55:55.478700 1302865 start.go:496] detecting cgroup driver to use...
	I1213 14:55:55.478730 1302865 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 14:55:55.478776 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 14:55:55.494349 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 14:55:55.507276 1302865 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:55:55.507360 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:55:55.523374 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:55:55.537388 1302865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:55:55.656533 1302865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:55:55.769801 1302865 docker.go:234] disabling docker service ...
	I1213 14:55:55.769857 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:55:55.784548 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:55:55.797129 1302865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:55:55.915684 1302865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:55:56.027646 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:55:56.050399 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:55:56.066005 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 14:55:56.076093 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 14:55:56.085556 1302865 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 14:55:56.085627 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 14:55:56.094545 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:55:56.104197 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 14:55:56.114269 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:55:56.123172 1302865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:55:56.132178 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 14:55:56.141074 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 14:55:56.150470 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 14:55:56.160063 1302865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:55:56.167903 1302865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:55:56.175659 1302865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:55:56.295844 1302865 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 14:55:56.441580 1302865 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 14:55:56.441654 1302865 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 14:55:56.445551 1302865 start.go:564] Will wait 60s for crictl version
	I1213 14:55:56.445607 1302865 ssh_runner.go:195] Run: which crictl
	I1213 14:55:56.449128 1302865 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 14:55:56.473587 1302865 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 14:55:56.473654 1302865 ssh_runner.go:195] Run: containerd --version
	I1213 14:55:56.493885 1302865 ssh_runner.go:195] Run: containerd --version
	I1213 14:55:56.518032 1302865 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 14:55:56.521077 1302865 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 14:55:56.537369 1302865 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 14:55:56.544433 1302865 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 14:55:56.547248 1302865 kubeadm.go:884] updating cluster {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:55:56.547410 1302865 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:55:56.547500 1302865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:55:56.572443 1302865 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:55:56.572458 1302865 containerd.go:534] Images already preloaded, skipping extraction
	I1213 14:55:56.572525 1302865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:55:56.603700 1302865 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:55:56.603712 1302865 cache_images.go:86] Images are preloaded, skipping loading
	I1213 14:55:56.603718 1302865 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 14:55:56.603824 1302865 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-562018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 14:55:56.603888 1302865 ssh_runner.go:195] Run: sudo crictl info
	I1213 14:55:56.640969 1302865 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 14:55:56.640988 1302865 cni.go:84] Creating CNI manager for ""
	I1213 14:55:56.640997 1302865 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:55:56.641011 1302865 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:55:56.641033 1302865 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-562018 NodeName:functional-562018 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubel
etConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:55:56.641163 1302865 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-562018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 14:55:56.641238 1302865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 14:55:56.649442 1302865 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:55:56.649507 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:55:56.657006 1302865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 14:55:56.669728 1302865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 14:55:56.682334 1302865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1213 14:55:56.694926 1302865 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 14:55:56.698838 1302865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:55:56.837238 1302865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:55:57.584722 1302865 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018 for IP: 192.168.49.2
	I1213 14:55:57.584733 1302865 certs.go:195] generating shared ca certs ...
	I1213 14:55:57.584753 1302865 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:55:57.584897 1302865 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 14:55:57.584947 1302865 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 14:55:57.584954 1302865 certs.go:257] generating profile certs ...
	I1213 14:55:57.585039 1302865 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key
	I1213 14:55:57.585090 1302865 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee
	I1213 14:55:57.585124 1302865 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key
	I1213 14:55:57.585235 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 14:55:57.585272 1302865 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 14:55:57.585280 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:55:57.585307 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 14:55:57.585330 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:55:57.585354 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 14:55:57.585399 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:55:57.591362 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:55:57.616349 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:55:57.635438 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:55:57.655371 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 14:55:57.672503 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 14:55:57.689594 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 14:55:57.706530 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:55:57.723556 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:55:57.740287 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 14:55:57.757304 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 14:55:57.774649 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:55:57.792687 1302865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:55:57.805822 1302865 ssh_runner.go:195] Run: openssl version
	I1213 14:55:57.812225 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.819503 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 14:55:57.826726 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.830446 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.830502 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.871253 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 14:55:57.878814 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.886029 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:55:57.893560 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.897283 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.897343 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.938225 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:55:57.946132 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.953318 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 14:55:57.960779 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.964616 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.964674 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 14:55:58.013928 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 14:55:58.021993 1302865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:55:58.026144 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:55:58.067380 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:55:58.114887 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:55:58.156572 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:55:58.199117 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:55:58.241809 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
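The six `openssl x509 -checkend 86400` runs above verify that none of the control-plane certificates expires within the next 24 hours (86400 seconds). Below is a minimal Go sketch of the same check done natively with crypto/x509; the certificate paths are copied from the log, but parsing in-process (instead of shelling out to openssl over SSH, as the log shows) is an assumption for illustration.

// certcheck.go: sketch of the 24h expiry check the log performs with openssl.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within window,
// the same question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		if err != nil {
			fmt.Fprintf(os.Stderr, "check %s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", c, soon)
	}
}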
	I1213 14:55:58.285184 1302865 kubeadm.go:401] StartCluster: {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:58.285266 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 14:55:58.285327 1302865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:55:58.314259 1302865 cri.go:89] found id: ""
	I1213 14:55:58.314322 1302865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:55:58.322386 1302865 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 14:55:58.322396 1302865 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 14:55:58.322453 1302865 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 14:55:58.329880 1302865 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.330377 1302865 kubeconfig.go:125] found "functional-562018" server: "https://192.168.49.2:8441"
	I1213 14:55:58.331729 1302865 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 14:55:58.341644 1302865 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 14:41:23.876598830 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 14:55:56.689854034 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
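The drift check above compares the deployed /var/tmp/minikube/kubeadm.yaml against the freshly rendered kubeadm.yaml.new with `diff -u` and treats exit status 1 as "configuration changed, reconfigure the cluster" (here the change is the enable-admission-plugins value). A minimal local sketch of that decision follows; minikube itself runs the diff on the guest over SSH, so the local exec here is an assumption.

// driftcheck.go: sketch of the kubeadm config drift detection step.
package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new`; exit 0 means no drift, exit 1 means
// the files differ (drift), anything else is reported as an error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	} else {
		fmt.Println("kubeadm config unchanged")
	}
}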
	I1213 14:55:58.341663 1302865 kubeadm.go:1161] stopping kube-system containers ...
	I1213 14:55:58.341678 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 14:55:58.341741 1302865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:55:58.374972 1302865 cri.go:89] found id: ""
	I1213 14:55:58.375050 1302865 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 14:55:58.396016 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:55:58.404525 1302865 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 14:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 13 14:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 13 14:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 14:45 /etc/kubernetes/scheduler.conf
	
	I1213 14:55:58.404584 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 14:55:58.412946 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 14:55:58.420580 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.420635 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 14:55:58.428221 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 14:55:58.435971 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.436028 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:55:58.443530 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 14:55:58.451393 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.451448 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
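The grep/rm sequence above keeps any kubeconfig that already points at https://control-plane.minikube.internal:8441 (admin.conf does) and deletes the ones that do not, so the following `kubeadm init phase kubeconfig all` regenerates them. A minimal Go sketch of that step; local execution without sudo is an assumption.

// refreshconf.go: sketch of the stale-kubeconfig cleanup shown above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8441"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		data, err := os.ReadFile(conf)
		if err != nil {
			fmt.Fprintf(os.Stderr, "read %s: %v\n", conf, err)
			continue
		}
		if strings.Contains(string(data), endpoint) {
			continue // endpoint already correct, keep the file
		}
		// Endpoint missing: remove the file so kubeadm regenerates it.
		if err := os.Remove(conf); err != nil {
			fmt.Fprintf(os.Stderr, "remove %s: %v\n", conf, err)
			continue
		}
		fmt.Printf("removed stale %s\n", conf)
	}
}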
	I1213 14:55:58.458854 1302865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 14:55:58.466605 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:58.520413 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:59.744405 1302865 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.223964216s)
	I1213 14:55:59.744467 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:59.946438 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:56:00.013725 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
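Between 14:55:58 and 14:56:00 the restart path re-runs the individual kubeadm init phases against the regenerated config, in order: certs, kubeconfig, kubelet-start, control-plane, and etcd. A minimal Go sketch of that sequence, run locally rather than through minikube's SSH runner (the binary path and phase list are taken from the log; everything else is an assumption).

// phases.go: sketch of re-running the kubeadm init phases during restart.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", cfg)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		fmt.Println("running:", kubeadm, args)
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}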
	I1213 14:56:00.113319 1302865 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:56:00.114955 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:00.613579 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:01.114177 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:01.613592 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:02.113571 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:02.613593 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:03.113840 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:03.613569 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:04.114249 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:04.613852 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:05.113537 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:05.613696 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:06.113540 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:06.614342 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:07.113785 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:07.613457 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:08.114283 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:08.613596 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:09.113597 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:09.614352 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:10.114532 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:10.613598 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:11.114365 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:11.614158 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:12.113539 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:12.613531 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:13.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:13.613463 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:14.114527 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:14.614435 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:15.113510 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:15.614373 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:16.114388 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:16.613507 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:17.113567 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:17.614369 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:18.113844 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:18.613714 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:19.114404 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:19.614169 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:20.114541 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:20.613650 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:21.113498 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:21.613589 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:22.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:22.613522 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:23.114240 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:23.614475 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:24.113893 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:24.614467 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:25.114531 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:25.613526 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:26.114346 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:26.614504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:27.113518 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:27.614286 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:28.114181 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:28.613958 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:29.113601 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:29.614343 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:30.114309 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:30.614109 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:31.114271 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:31.613510 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:32.114261 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:32.614199 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:33.114060 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:33.614237 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:34.114371 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:34.614467 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:35.114182 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:35.613614 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:36.113542 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:36.614402 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:37.114233 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:37.613522 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:38.113599 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:38.613584 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:39.114045 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:39.613569 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:40.113521 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:40.613504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:41.113503 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:41.614239 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:42.113697 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:42.614293 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:43.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:43.614231 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:44.114413 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:44.614537 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:45.114187 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:45.613592 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:46.113667 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:46.613755 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:47.113597 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:47.614262 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:48.113463 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:48.613700 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:49.113578 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:49.614192 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:50.113501 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:50.613492 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:51.114160 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:51.613924 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:52.114491 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:52.613532 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:53.113608 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:53.613620 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:54.114432 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:54.614359 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:55.114461 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:55.614143 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:56.113587 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:56.614451 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:57.113619 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:57.613622 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:58.113547 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:58.614429 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:59.113617 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:59.613534 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
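Everything between 14:56:00 and 14:57:00 above is one wait loop: minikube polls `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until the kube-apiserver process appears, and in this failing run it never does. A minimal Go sketch of that loop; the interval and timeout are inferred from the timestamps, and running pgrep locally instead of over SSH is an assumption.

// waitapiserver.go: sketch of the apiserver-process wait loop in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until it exits 0 (process found) or the
// deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err) // in the failing run above, this is the branch taken
		return
	}
	fmt.Println("kube-apiserver is running")
}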
	I1213 14:57:00.124126 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:00.124233 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:00.200982 1302865 cri.go:89] found id: ""
	I1213 14:57:00.201003 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.201011 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:00.201018 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:00.201100 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:00.237755 1302865 cri.go:89] found id: ""
	I1213 14:57:00.237770 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.237778 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:00.237783 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:00.237861 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:00.301679 1302865 cri.go:89] found id: ""
	I1213 14:57:00.301694 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.301702 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:00.301709 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:00.301778 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:00.347228 1302865 cri.go:89] found id: ""
	I1213 14:57:00.347243 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.347251 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:00.347256 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:00.347356 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:00.376454 1302865 cri.go:89] found id: ""
	I1213 14:57:00.376471 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.376479 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:00.376485 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:00.376555 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:00.408967 1302865 cri.go:89] found id: ""
	I1213 14:57:00.408982 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.408989 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:00.408995 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:00.409059 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:00.437494 1302865 cri.go:89] found id: ""
	I1213 14:57:00.437509 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.437516 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:00.437524 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:00.437534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:00.493840 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:00.493860 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:00.511767 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:00.511785 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:00.579231 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:00.570332   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.571045   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.572740   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.573345   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.575117   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:00.570332   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.571045   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.572740   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.573345   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.575117   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:00.579242 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:00.579253 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:00.641446 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:00.641467 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
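Once no control-plane containers are found, each retry cycle gathers the same five diagnostics: the kubelet and containerd journals, dmesg, `kubectl describe nodes` (which fails here because the apiserver is down), and the container list. A minimal Go sketch of that collection pass, run locally and without sudo for illustration; the commands themselves are copied from the log.

// gatherlogs.go: sketch of the per-cycle diagnostic collection shown above.
package main

import (
	"fmt"
	"os/exec"
)

type source struct {
	name string
	cmd  string
}

func main() {
	sources := []source{
		{"kubelet", "journalctl -u kubelet -n 400"},
		{"dmesg", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"containerd", "journalctl -u containerd -n 400"},
		{"container status", "`which crictl || echo crictl` ps -a || docker ps -a"},
	}
	for _, s := range sources {
		// Run each collector through bash so pipes and backticks work,
		// mirroring the /bin/bash -c invocations in the log (sudo omitted).
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// A failed collector (e.g. describe nodes while the apiserver is
			// down) is noted as a warning; the remaining sources still run.
			fmt.Printf("W gathering %q failed: %v\n", s.name, err)
		}
		fmt.Printf("==> %s: collected %d bytes\n", s.name, len(out))
	}
}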
	I1213 14:57:03.171486 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:03.181873 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:03.181935 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:03.212211 1302865 cri.go:89] found id: ""
	I1213 14:57:03.212226 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.212232 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:03.212244 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:03.212304 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:03.237934 1302865 cri.go:89] found id: ""
	I1213 14:57:03.237949 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.237957 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:03.237962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:03.238034 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:03.263822 1302865 cri.go:89] found id: ""
	I1213 14:57:03.263836 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.263843 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:03.263848 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:03.263910 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:03.289876 1302865 cri.go:89] found id: ""
	I1213 14:57:03.289890 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.289898 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:03.289902 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:03.289965 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:03.317957 1302865 cri.go:89] found id: ""
	I1213 14:57:03.317972 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.317979 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:03.318000 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:03.318060 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:03.346780 1302865 cri.go:89] found id: ""
	I1213 14:57:03.346793 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.346800 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:03.346805 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:03.346864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:03.371472 1302865 cri.go:89] found id: ""
	I1213 14:57:03.371485 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.371493 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:03.371501 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:03.371512 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:03.399569 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:03.399588 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:03.454307 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:03.454327 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:03.472933 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:03.472951 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:03.538528 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:03.529465   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.530241   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532007   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532517   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.534246   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:03.529465   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.530241   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532007   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532517   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.534246   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:03.538539 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:03.538550 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:06.101738 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:06.112716 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:06.112778 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:06.139740 1302865 cri.go:89] found id: ""
	I1213 14:57:06.139753 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.139759 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:06.139770 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:06.139831 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:06.169906 1302865 cri.go:89] found id: ""
	I1213 14:57:06.169920 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.169927 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:06.169932 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:06.169993 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:06.194468 1302865 cri.go:89] found id: ""
	I1213 14:57:06.194482 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.194492 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:06.194497 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:06.194556 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:06.219346 1302865 cri.go:89] found id: ""
	I1213 14:57:06.219360 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.219367 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:06.219372 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:06.219466 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:06.244844 1302865 cri.go:89] found id: ""
	I1213 14:57:06.244858 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.244865 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:06.244870 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:06.244928 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:06.269412 1302865 cri.go:89] found id: ""
	I1213 14:57:06.269425 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.269433 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:06.269438 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:06.269498 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:06.293947 1302865 cri.go:89] found id: ""
	I1213 14:57:06.293960 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.293967 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:06.293975 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:06.293991 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:06.320232 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:06.320249 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:06.375210 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:06.375229 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:06.392065 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:06.392081 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:06.457910 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:06.449858   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.450478   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452122   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452601   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.454099   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:06.449858   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.450478   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452122   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452601   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.454099   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:06.457920 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:06.457931 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:09.020376 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:09.030584 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:09.030644 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:09.057441 1302865 cri.go:89] found id: ""
	I1213 14:57:09.057455 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.057462 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:09.057467 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:09.057529 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:09.091252 1302865 cri.go:89] found id: ""
	I1213 14:57:09.091266 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.091273 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:09.091277 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:09.091357 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:09.133954 1302865 cri.go:89] found id: ""
	I1213 14:57:09.133969 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.133976 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:09.133981 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:09.134041 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:09.161351 1302865 cri.go:89] found id: ""
	I1213 14:57:09.161365 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.161372 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:09.161386 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:09.161449 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:09.186493 1302865 cri.go:89] found id: ""
	I1213 14:57:09.186507 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.186515 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:09.186519 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:09.186579 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:09.210752 1302865 cri.go:89] found id: ""
	I1213 14:57:09.210766 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.210774 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:09.210779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:09.210841 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:09.235216 1302865 cri.go:89] found id: ""
	I1213 14:57:09.235231 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.235238 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:09.235246 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:09.235256 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:09.290010 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:09.290030 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:09.307105 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:09.307122 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:09.373837 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:09.365574   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.366312   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368046   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368412   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.369910   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:09.365574   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.366312   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368046   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368412   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.369910   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:09.373848 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:09.373862 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:09.435916 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:09.435937 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:11.968947 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:11.978917 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:11.978976 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:12.003367 1302865 cri.go:89] found id: ""
	I1213 14:57:12.003387 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.003395 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:12.003401 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:12.003472 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:12.030862 1302865 cri.go:89] found id: ""
	I1213 14:57:12.030876 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.030883 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:12.030889 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:12.030947 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:12.055991 1302865 cri.go:89] found id: ""
	I1213 14:57:12.056006 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.056014 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:12.056020 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:12.056078 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:12.088685 1302865 cri.go:89] found id: ""
	I1213 14:57:12.088699 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.088706 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:12.088711 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:12.088771 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:12.119175 1302865 cri.go:89] found id: ""
	I1213 14:57:12.119199 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.119206 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:12.119212 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:12.119276 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:12.148170 1302865 cri.go:89] found id: ""
	I1213 14:57:12.148192 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.148199 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:12.148204 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:12.148276 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:12.173907 1302865 cri.go:89] found id: ""
	I1213 14:57:12.173929 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.173936 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:12.173944 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:12.173955 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:12.230024 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:12.230044 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:12.249202 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:12.249219 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:12.317257 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:12.307463   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.308131   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.309930   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.310545   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.312207   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:12.307463   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.308131   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.309930   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.310545   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.312207   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:12.317267 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:12.317284 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:12.384433 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:12.384455 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:14.917091 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:14.927788 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:14.927850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:14.953190 1302865 cri.go:89] found id: ""
	I1213 14:57:14.953205 1302865 logs.go:282] 0 containers: []
	W1213 14:57:14.953212 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:14.953226 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:14.953289 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:14.978043 1302865 cri.go:89] found id: ""
	I1213 14:57:14.978068 1302865 logs.go:282] 0 containers: []
	W1213 14:57:14.978075 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:14.978081 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:14.978175 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:15.004731 1302865 cri.go:89] found id: ""
	I1213 14:57:15.004749 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.004756 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:15.004761 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:15.004846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:15.048669 1302865 cri.go:89] found id: ""
	I1213 14:57:15.048685 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.048693 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:15.048698 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:15.048777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:15.085505 1302865 cri.go:89] found id: ""
	I1213 14:57:15.085520 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.085528 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:15.085534 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:15.085607 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:15.124753 1302865 cri.go:89] found id: ""
	I1213 14:57:15.124776 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.124784 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:15.124790 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:15.124860 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:15.168668 1302865 cri.go:89] found id: ""
	I1213 14:57:15.168682 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.168690 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:15.168698 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:15.168720 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:15.236878 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:15.228546   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.229113   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.230853   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.231344   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.232993   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:15.228546   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.229113   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.230853   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.231344   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.232993   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:15.236889 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:15.236899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:15.299331 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:15.299361 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:15.331125 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:15.331142 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:15.391451 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:15.391478 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:17.910179 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:17.920514 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:17.920590 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:17.945066 1302865 cri.go:89] found id: ""
	I1213 14:57:17.945081 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.945088 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:17.945094 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:17.945152 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:17.972856 1302865 cri.go:89] found id: ""
	I1213 14:57:17.972870 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.972878 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:17.972882 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:17.972944 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:17.999205 1302865 cri.go:89] found id: ""
	I1213 14:57:17.999219 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.999226 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:17.999231 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:17.999288 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:18.034164 1302865 cri.go:89] found id: ""
	I1213 14:57:18.034178 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.034185 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:18.034190 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:18.034255 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:18.060346 1302865 cri.go:89] found id: ""
	I1213 14:57:18.060361 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.060368 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:18.060373 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:18.060438 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:18.089688 1302865 cri.go:89] found id: ""
	I1213 14:57:18.089702 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.089710 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:18.089718 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:18.089780 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:18.128859 1302865 cri.go:89] found id: ""
	I1213 14:57:18.128874 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.128881 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:18.128889 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:18.128899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:18.188820 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:18.188842 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:18.206229 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:18.206247 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:18.277989 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:18.269084   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.269641   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.271489   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.272150   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.273959   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:18.269084   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.269641   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.271489   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.272150   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.273959   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:18.277999 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:18.278009 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:18.339945 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:18.339965 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:20.869114 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:20.879800 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:20.879866 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:20.905760 1302865 cri.go:89] found id: ""
	I1213 14:57:20.905774 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.905781 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:20.905786 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:20.905849 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:20.931353 1302865 cri.go:89] found id: ""
	I1213 14:57:20.931367 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.931374 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:20.931379 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:20.931445 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:20.956682 1302865 cri.go:89] found id: ""
	I1213 14:57:20.956696 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.956704 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:20.956709 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:20.956769 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:20.980824 1302865 cri.go:89] found id: ""
	I1213 14:57:20.980838 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.980845 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:20.980850 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:20.980909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:21.008951 1302865 cri.go:89] found id: ""
	I1213 14:57:21.008974 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.008982 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:21.008987 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:21.009058 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:21.038190 1302865 cri.go:89] found id: ""
	I1213 14:57:21.038204 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.038211 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:21.038216 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:21.038277 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:21.063608 1302865 cri.go:89] found id: ""
	I1213 14:57:21.063622 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.063630 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:21.063638 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:21.063648 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:21.132089 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:21.132109 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:21.171889 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:21.171908 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:21.230786 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:21.230806 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:21.247733 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:21.247753 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:21.318785 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:21.309659   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.310476   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312082   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312759   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.314434   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:21.309659   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.310476   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312082   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312759   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.314434   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:23.819828 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:23.830541 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:23.830604 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:23.853826 1302865 cri.go:89] found id: ""
	I1213 14:57:23.853840 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.853856 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:23.853862 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:23.853933 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:23.879146 1302865 cri.go:89] found id: ""
	I1213 14:57:23.879169 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.879177 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:23.879182 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:23.879253 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:23.904357 1302865 cri.go:89] found id: ""
	I1213 14:57:23.904371 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.904379 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:23.904384 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:23.904450 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:23.929036 1302865 cri.go:89] found id: ""
	I1213 14:57:23.929050 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.929058 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:23.929063 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:23.929124 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:23.954748 1302865 cri.go:89] found id: ""
	I1213 14:57:23.954762 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.954779 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:23.954785 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:23.954854 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:23.979661 1302865 cri.go:89] found id: ""
	I1213 14:57:23.979676 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.979683 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:23.979687 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:23.979750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:24.009902 1302865 cri.go:89] found id: ""
	I1213 14:57:24.009918 1302865 logs.go:282] 0 containers: []
	W1213 14:57:24.009925 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:24.009935 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:24.009946 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:24.079943 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:24.070877   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.071501   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073325   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073881   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.075497   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:24.070877   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.071501   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073325   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073881   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.075497   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:24.079954 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:24.079966 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:24.144015 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:24.144037 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:24.174637 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:24.174654 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:24.235392 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:24.235413 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:26.753238 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:26.763339 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:26.763404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:26.788474 1302865 cri.go:89] found id: ""
	I1213 14:57:26.788487 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.788494 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:26.788499 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:26.788559 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:26.814440 1302865 cri.go:89] found id: ""
	I1213 14:57:26.814454 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.814461 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:26.814466 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:26.814524 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:26.841795 1302865 cri.go:89] found id: ""
	I1213 14:57:26.841809 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.841816 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:26.841821 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:26.841880 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:26.869399 1302865 cri.go:89] found id: ""
	I1213 14:57:26.869413 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.869420 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:26.869425 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:26.869482 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:26.892445 1302865 cri.go:89] found id: ""
	I1213 14:57:26.892459 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.892467 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:26.892472 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:26.892535 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:26.916537 1302865 cri.go:89] found id: ""
	I1213 14:57:26.916558 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.916565 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:26.916570 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:26.916639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:26.940628 1302865 cri.go:89] found id: ""
	I1213 14:57:26.940650 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.940658 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:26.940671 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:26.940681 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:26.969808 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:26.969827 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:27.025191 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:27.025211 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:27.042465 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:27.042482 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:27.122593 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:27.114680   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.115527   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117159   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117448   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.118857   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:27.114680   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.115527   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117159   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117448   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.118857   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:27.122618 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:27.122628 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:29.693191 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:29.703585 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:29.703652 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:29.732578 1302865 cri.go:89] found id: ""
	I1213 14:57:29.732593 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.732614 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:29.732621 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:29.732686 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:29.757517 1302865 cri.go:89] found id: ""
	I1213 14:57:29.757531 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.757538 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:29.757543 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:29.757604 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:29.785456 1302865 cri.go:89] found id: ""
	I1213 14:57:29.785470 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.785476 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:29.785482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:29.785544 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:29.809997 1302865 cri.go:89] found id: ""
	I1213 14:57:29.810011 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.810018 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:29.810023 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:29.810085 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:29.834277 1302865 cri.go:89] found id: ""
	I1213 14:57:29.834292 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.834299 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:29.834304 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:29.834366 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:29.858653 1302865 cri.go:89] found id: ""
	I1213 14:57:29.858667 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.858675 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:29.858686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:29.858749 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:29.884435 1302865 cri.go:89] found id: ""
	I1213 14:57:29.884450 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.884456 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:29.884464 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:29.884477 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:29.911338 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:29.911356 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:29.966819 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:29.966838 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:29.985125 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:29.985141 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:30.070789 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:30.059761   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.060525   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063144   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063902   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.066109   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:30.059761   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.060525   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063144   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063902   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.066109   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:30.070800 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:30.070811 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:32.643832 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:32.654329 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:32.654399 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:32.687375 1302865 cri.go:89] found id: ""
	I1213 14:57:32.687390 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.687398 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:32.687403 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:32.687465 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:32.712437 1302865 cri.go:89] found id: ""
	I1213 14:57:32.712452 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.712460 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:32.712465 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:32.712529 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:32.738220 1302865 cri.go:89] found id: ""
	I1213 14:57:32.738234 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.738241 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:32.738247 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:32.738310 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:32.763211 1302865 cri.go:89] found id: ""
	I1213 14:57:32.763226 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.763233 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:32.763238 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:32.763299 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:32.789049 1302865 cri.go:89] found id: ""
	I1213 14:57:32.789063 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.789071 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:32.789077 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:32.789141 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:32.815194 1302865 cri.go:89] found id: ""
	I1213 14:57:32.815208 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.815215 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:32.815221 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:32.815284 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:32.840629 1302865 cri.go:89] found id: ""
	I1213 14:57:32.840646 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.840653 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:32.840661 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:32.840672 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:32.868556 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:32.868574 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:32.923451 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:32.923472 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:32.940492 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:32.940508 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:33.014646 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:33.000281   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.001080   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.003301   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.004000   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.006154   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:33.000281   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.001080   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.003301   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.004000   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.006154   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:33.014656 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:33.014680 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:35.576582 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:35.586876 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:35.586939 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:35.612619 1302865 cri.go:89] found id: ""
	I1213 14:57:35.612634 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.612641 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:35.612646 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:35.612714 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:35.637275 1302865 cri.go:89] found id: ""
	I1213 14:57:35.637289 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.637296 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:35.637302 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:35.637363 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:35.661936 1302865 cri.go:89] found id: ""
	I1213 14:57:35.661950 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.661957 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:35.661962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:35.662035 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:35.691702 1302865 cri.go:89] found id: ""
	I1213 14:57:35.691716 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.691722 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:35.691727 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:35.691789 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:35.719594 1302865 cri.go:89] found id: ""
	I1213 14:57:35.719608 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.719614 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:35.719619 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:35.719685 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:35.747602 1302865 cri.go:89] found id: ""
	I1213 14:57:35.747617 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.747624 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:35.747629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:35.747690 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:35.772489 1302865 cri.go:89] found id: ""
	I1213 14:57:35.772503 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.772510 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:35.772517 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:35.772534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:35.801457 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:35.801474 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:35.859688 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:35.859708 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:35.877069 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:35.877087 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:35.942565 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:35.934502   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.935224   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.936888   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.937197   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.938690   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:35.934502   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.935224   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.936888   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.937197   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.938690   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:35.942576 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:35.942595 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:38.506862 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:38.517509 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:38.517575 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:38.542481 1302865 cri.go:89] found id: ""
	I1213 14:57:38.542496 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.542512 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:38.542517 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:38.542586 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:38.567177 1302865 cri.go:89] found id: ""
	I1213 14:57:38.567191 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.567198 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:38.567202 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:38.567264 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:38.591952 1302865 cri.go:89] found id: ""
	I1213 14:57:38.591967 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.591974 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:38.591979 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:38.592036 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:38.615589 1302865 cri.go:89] found id: ""
	I1213 14:57:38.615604 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.615619 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:38.615625 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:38.615697 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:38.641025 1302865 cri.go:89] found id: ""
	I1213 14:57:38.641039 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.641046 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:38.641051 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:38.641115 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:38.666245 1302865 cri.go:89] found id: ""
	I1213 14:57:38.666259 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.666276 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:38.666282 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:38.666355 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:38.691177 1302865 cri.go:89] found id: ""
	I1213 14:57:38.691191 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.691198 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:38.691206 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:38.691217 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:38.748984 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:38.749004 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:38.765774 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:38.765791 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:38.833656 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:38.825292   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.825956   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.827650   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.828246   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.829830   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:38.825292   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.825956   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.827650   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.828246   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.829830   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:38.833672 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:38.833683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:38.895503 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:38.895524 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:41.424760 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:41.435082 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:41.435154 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:41.460250 1302865 cri.go:89] found id: ""
	I1213 14:57:41.460265 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.460273 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:41.460278 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:41.460338 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:41.490003 1302865 cri.go:89] found id: ""
	I1213 14:57:41.490017 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.490024 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:41.490029 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:41.490094 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:41.515086 1302865 cri.go:89] found id: ""
	I1213 14:57:41.515100 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.515107 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:41.515112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:41.515173 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:41.540169 1302865 cri.go:89] found id: ""
	I1213 14:57:41.540183 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.540205 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:41.540211 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:41.540279 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:41.564345 1302865 cri.go:89] found id: ""
	I1213 14:57:41.564358 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.564365 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:41.564370 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:41.564429 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:41.589001 1302865 cri.go:89] found id: ""
	I1213 14:57:41.589015 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.589022 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:41.589027 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:41.589091 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:41.617434 1302865 cri.go:89] found id: ""
	I1213 14:57:41.617447 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.617455 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:41.617462 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:41.617471 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:41.683384 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:41.683411 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:41.711592 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:41.711611 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:41.769286 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:41.769305 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:41.786199 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:41.786219 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:41.854485 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:41.846163   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.846886   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848432   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848940   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.850514   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:41.846163   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.846886   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848432   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848940   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.850514   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:44.355606 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:44.369969 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:44.370032 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:44.401460 1302865 cri.go:89] found id: ""
	I1213 14:57:44.401474 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.401481 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:44.401486 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:44.401548 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:44.431513 1302865 cri.go:89] found id: ""
	I1213 14:57:44.431527 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.431534 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:44.431539 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:44.431600 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:44.457242 1302865 cri.go:89] found id: ""
	I1213 14:57:44.457256 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.457263 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:44.457268 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:44.457329 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:44.482224 1302865 cri.go:89] found id: ""
	I1213 14:57:44.482238 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.482245 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:44.482250 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:44.482313 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:44.509856 1302865 cri.go:89] found id: ""
	I1213 14:57:44.509871 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.509878 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:44.509884 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:44.509950 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:44.533977 1302865 cri.go:89] found id: ""
	I1213 14:57:44.533992 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.533999 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:44.534005 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:44.534069 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:44.562015 1302865 cri.go:89] found id: ""
	I1213 14:57:44.562029 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.562036 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:44.562044 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:44.562055 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:44.629999 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:44.621407   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.622108   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.623865   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.624500   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.626099   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:44.621407   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.622108   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.623865   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.624500   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.626099   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:44.630009 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:44.630020 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:44.697021 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:44.697042 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:44.725319 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:44.725336 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:44.783033 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:44.783053 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
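	[editor's note] The cycle above repeats every few seconds: minikube probes for a kube-apiserver process, lists CRI containers for each control-plane component, and re-runs "kubectl describe nodes", which keeps failing because nothing is listening on localhost:8441. A minimal sketch of the health probe the loop is effectively waiting on is shown below; it is not part of the test harness, and the /healthz path, 5-second timeout, and insecure-TLS setting are illustrative assumptions (port 8441 is taken from the log).

	// probe.go - ad-hoc sketch, not minikube code: check whether an apiserver
	// answers on the port the log above shows as "connection refused".
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a self-signed cert during bring-up,
				// so skip verification for this ad-hoc probe only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8441/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded:", resp.Status)
	}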
	I1213 14:57:47.300684 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:47.311369 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:47.311431 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:47.343773 1302865 cri.go:89] found id: ""
	I1213 14:57:47.343787 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.343794 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:47.343800 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:47.343864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:47.373867 1302865 cri.go:89] found id: ""
	I1213 14:57:47.373881 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.373888 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:47.373893 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:47.373950 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:47.409488 1302865 cri.go:89] found id: ""
	I1213 14:57:47.409503 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.409510 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:47.409515 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:47.409576 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:47.436144 1302865 cri.go:89] found id: ""
	I1213 14:57:47.436160 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.436166 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:47.436172 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:47.436231 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:47.459642 1302865 cri.go:89] found id: ""
	I1213 14:57:47.459656 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.459664 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:47.459669 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:47.459728 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:47.488525 1302865 cri.go:89] found id: ""
	I1213 14:57:47.488539 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.488546 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:47.488589 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:47.488660 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:47.513277 1302865 cri.go:89] found id: ""
	I1213 14:57:47.513304 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.513312 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:47.513320 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:47.513333 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:47.569182 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:47.569201 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:47.586016 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:47.586033 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:47.657399 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:47.648527   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.649441   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651289   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651877   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.653400   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:47.648527   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.649441   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651289   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651877   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.653400   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:47.657410 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:47.657421 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:47.719756 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:47.719776 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:50.250366 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:50.261360 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:50.261430 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:50.285575 1302865 cri.go:89] found id: ""
	I1213 14:57:50.285588 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.285595 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:50.285601 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:50.285657 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:50.313925 1302865 cri.go:89] found id: ""
	I1213 14:57:50.313939 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.313946 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:50.313951 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:50.314025 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:50.350634 1302865 cri.go:89] found id: ""
	I1213 14:57:50.350653 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.350660 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:50.350665 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:50.350725 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:50.377901 1302865 cri.go:89] found id: ""
	I1213 14:57:50.377915 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.377922 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:50.377927 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:50.377987 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:50.408528 1302865 cri.go:89] found id: ""
	I1213 14:57:50.408550 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.408557 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:50.408562 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:50.408637 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:50.434189 1302865 cri.go:89] found id: ""
	I1213 14:57:50.434203 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.434212 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:50.434217 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:50.434275 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:50.459353 1302865 cri.go:89] found id: ""
	I1213 14:57:50.459367 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.459373 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:50.459381 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:50.459391 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:50.515565 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:50.515585 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:50.532866 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:50.532883 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:50.599094 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:50.590849   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.591681   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593186   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593701   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.595193   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:50.590849   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.591681   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593186   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593701   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.595193   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:50.599104 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:50.599115 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:50.663140 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:50.663159 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:53.200108 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:53.210621 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:53.210684 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:53.236457 1302865 cri.go:89] found id: ""
	I1213 14:57:53.236471 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.236478 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:53.236483 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:53.236545 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:53.269649 1302865 cri.go:89] found id: ""
	I1213 14:57:53.269664 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.269670 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:53.269677 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:53.269738 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:53.293759 1302865 cri.go:89] found id: ""
	I1213 14:57:53.293774 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.293781 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:53.293786 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:53.293846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:53.318675 1302865 cri.go:89] found id: ""
	I1213 14:57:53.318690 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.318696 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:53.318701 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:53.318765 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:53.353544 1302865 cri.go:89] found id: ""
	I1213 14:57:53.353558 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.353564 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:53.353569 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:53.353630 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:53.381535 1302865 cri.go:89] found id: ""
	I1213 14:57:53.381549 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.381565 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:53.381571 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:53.381641 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:53.408473 1302865 cri.go:89] found id: ""
	I1213 14:57:53.408487 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.408494 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:53.408502 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:53.408514 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:53.463646 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:53.463670 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:53.480500 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:53.480518 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:53.545969 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:53.538062   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.538490   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540123   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540597   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.542043   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:53.538062   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.538490   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540123   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540597   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.542043   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:53.545979 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:53.545991 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:53.607729 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:53.607750 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:56.139407 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:56.150264 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:56.150335 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:56.175852 1302865 cri.go:89] found id: ""
	I1213 14:57:56.175866 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.175873 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:56.175878 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:56.175942 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:56.202887 1302865 cri.go:89] found id: ""
	I1213 14:57:56.202901 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.202908 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:56.202921 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:56.202981 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:56.229038 1302865 cri.go:89] found id: ""
	I1213 14:57:56.229053 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.229060 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:56.229065 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:56.229125 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:56.253081 1302865 cri.go:89] found id: ""
	I1213 14:57:56.253096 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.253103 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:56.253108 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:56.253172 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:56.277822 1302865 cri.go:89] found id: ""
	I1213 14:57:56.277836 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.277843 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:56.277849 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:56.277910 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:56.302419 1302865 cri.go:89] found id: ""
	I1213 14:57:56.302435 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.302442 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:56.302447 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:56.302508 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:56.327036 1302865 cri.go:89] found id: ""
	I1213 14:57:56.327050 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.327057 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:56.327066 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:56.327078 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:56.353968 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:56.353986 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:56.426915 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:56.418628   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.419173   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.420794   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.421363   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.422912   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:56.418628   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.419173   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.420794   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.421363   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.422912   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:56.426926 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:56.426943 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:56.488491 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:56.488513 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:56.516737 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:56.516753 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:59.077330 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:59.087745 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:59.087809 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:59.113689 1302865 cri.go:89] found id: ""
	I1213 14:57:59.113703 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.113710 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:59.113715 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:59.113774 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:59.138884 1302865 cri.go:89] found id: ""
	I1213 14:57:59.138898 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.138905 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:59.138911 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:59.138976 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:59.164226 1302865 cri.go:89] found id: ""
	I1213 14:57:59.164240 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.164246 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:59.164254 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:59.164312 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:59.189753 1302865 cri.go:89] found id: ""
	I1213 14:57:59.189767 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.189774 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:59.189779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:59.189840 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:59.219066 1302865 cri.go:89] found id: ""
	I1213 14:57:59.219080 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.219086 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:59.219092 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:59.219152 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:59.243456 1302865 cri.go:89] found id: ""
	I1213 14:57:59.243470 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.243477 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:59.243482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:59.243544 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:59.267676 1302865 cri.go:89] found id: ""
	I1213 14:57:59.267692 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.267699 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:59.267707 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:59.267719 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:59.284600 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:59.284617 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:59.356184 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:59.346354   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.347267   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349163   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349876   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.351596   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:59.346354   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.347267   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349163   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349876   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.351596   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:59.356202 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:59.356215 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:59.427513 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:59.427535 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:59.459203 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:59.459220 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
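	[editor's note] Each cycle also shells out to crictl for every control-plane component and finds zero containers ("found id: \"\""). The sketch below reproduces that check outside the harness by wrapping the same crictl invocation seen in the log; running it inside the node and needing sudo are assumptions, and the component list is abbreviated for illustration.

	// crictl_check.go - ad-hoc sketch, not minikube code: count containers per
	// component the same way the log's "sudo crictl ps -a --quiet --name=..." does.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			// --quiet prints one container ID per line; an empty result means
			// the component has no container at all, matching the log above.
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}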
	I1213 14:58:02.016233 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:02.027182 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:02.027246 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:02.053453 1302865 cri.go:89] found id: ""
	I1213 14:58:02.053467 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.053475 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:02.053480 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:02.053543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:02.081288 1302865 cri.go:89] found id: ""
	I1213 14:58:02.081303 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.081310 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:02.081315 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:02.081377 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:02.106556 1302865 cri.go:89] found id: ""
	I1213 14:58:02.106572 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.106579 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:02.106585 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:02.106645 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:02.131201 1302865 cri.go:89] found id: ""
	I1213 14:58:02.131215 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.131221 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:02.131226 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:02.131286 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:02.156170 1302865 cri.go:89] found id: ""
	I1213 14:58:02.156194 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.156202 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:02.156207 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:02.156275 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:02.185059 1302865 cri.go:89] found id: ""
	I1213 14:58:02.185073 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.185080 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:02.185086 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:02.185153 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:02.209854 1302865 cri.go:89] found id: ""
	I1213 14:58:02.209870 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.209884 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:02.209893 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:02.209903 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:02.279934 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:02.270936   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.271416   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273300   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273902   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.275651   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:02.270936   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.271416   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273300   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273902   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.275651   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:02.279958 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:02.279970 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:02.341869 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:02.341888 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:02.370761 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:02.370783 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:02.431851 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:02.431869 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:04.950137 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:04.960995 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:04.961059 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:04.986243 1302865 cri.go:89] found id: ""
	I1213 14:58:04.986257 1302865 logs.go:282] 0 containers: []
	W1213 14:58:04.986264 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:04.986269 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:04.986329 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:05.016170 1302865 cri.go:89] found id: ""
	I1213 14:58:05.016192 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.016200 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:05.016206 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:05.016270 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:05.042103 1302865 cri.go:89] found id: ""
	I1213 14:58:05.042117 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.042124 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:05.042129 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:05.042188 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:05.066050 1302865 cri.go:89] found id: ""
	I1213 14:58:05.066065 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.066071 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:05.066077 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:05.066141 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:05.091600 1302865 cri.go:89] found id: ""
	I1213 14:58:05.091615 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.091623 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:05.091634 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:05.091698 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:05.117406 1302865 cri.go:89] found id: ""
	I1213 14:58:05.117420 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.117427 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:05.117432 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:05.117491 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:05.143774 1302865 cri.go:89] found id: ""
	I1213 14:58:05.143788 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.143794 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:05.143802 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:05.143823 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:05.198717 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:05.198736 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:05.216110 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:05.216127 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:05.281771 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:05.273944   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.274471   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276069   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276389   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.277907   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:05.273944   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.274471   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276069   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276389   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.277907   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:05.281792 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:05.281804 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:05.344051 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:05.344070 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:07.872032 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:07.883862 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:07.883925 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:07.908603 1302865 cri.go:89] found id: ""
	I1213 14:58:07.908616 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.908623 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:07.908628 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:07.908696 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:07.932609 1302865 cri.go:89] found id: ""
	I1213 14:58:07.932624 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.932631 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:07.932636 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:07.932729 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:07.957476 1302865 cri.go:89] found id: ""
	I1213 14:58:07.957490 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.957497 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:07.957502 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:07.957561 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:07.983994 1302865 cri.go:89] found id: ""
	I1213 14:58:07.984014 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.984022 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:07.984027 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:07.984090 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:08.016758 1302865 cri.go:89] found id: ""
	I1213 14:58:08.016772 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.016779 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:08.016784 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:08.016850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:08.048311 1302865 cri.go:89] found id: ""
	I1213 14:58:08.048326 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.048333 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:08.048338 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:08.048404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:08.074196 1302865 cri.go:89] found id: ""
	I1213 14:58:08.074211 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.074219 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:08.074226 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:08.074237 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:08.139046 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:08.139073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:08.167121 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:08.167141 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:08.222634 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:08.222664 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:08.240309 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:08.240325 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:08.310479 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:08.301605   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.302332   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304082   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304629   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.306246   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:08.301605   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.302332   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304082   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304629   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.306246   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:10.810723 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:10.820844 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:10.820953 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:10.865862 1302865 cri.go:89] found id: ""
	I1213 14:58:10.865875 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.865882 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:10.865888 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:10.865959 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:10.896607 1302865 cri.go:89] found id: ""
	I1213 14:58:10.896621 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.896628 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:10.896634 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:10.896710 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:10.924657 1302865 cri.go:89] found id: ""
	I1213 14:58:10.924671 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.924678 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:10.924684 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:10.924748 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:10.949300 1302865 cri.go:89] found id: ""
	I1213 14:58:10.949314 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.949321 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:10.949326 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:10.949388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:10.973896 1302865 cri.go:89] found id: ""
	I1213 14:58:10.973910 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.973917 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:10.973922 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:10.973983 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:10.998200 1302865 cri.go:89] found id: ""
	I1213 14:58:10.998214 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.998231 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:10.998237 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:10.998295 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:11.024841 1302865 cri.go:89] found id: ""
	I1213 14:58:11.024856 1302865 logs.go:282] 0 containers: []
	W1213 14:58:11.024863 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:11.024871 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:11.024886 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:11.092350 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:11.083613   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.084237   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086051   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086531   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.088132   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:11.083613   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.084237   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086051   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086531   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.088132   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:11.092361 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:11.092372 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:11.154591 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:11.154612 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:11.187883 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:11.187899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:11.248594 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:11.248613 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:13.766160 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:13.776057 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:13.776115 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:13.800863 1302865 cri.go:89] found id: ""
	I1213 14:58:13.800877 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.800884 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:13.800889 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:13.800990 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:13.825283 1302865 cri.go:89] found id: ""
	I1213 14:58:13.825298 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.825305 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:13.825309 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:13.825368 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:13.857732 1302865 cri.go:89] found id: ""
	I1213 14:58:13.857746 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.857753 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:13.857758 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:13.857816 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:13.891546 1302865 cri.go:89] found id: ""
	I1213 14:58:13.891560 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.891566 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:13.891572 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:13.891629 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:13.918725 1302865 cri.go:89] found id: ""
	I1213 14:58:13.918738 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.918746 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:13.918750 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:13.918810 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:13.942434 1302865 cri.go:89] found id: ""
	I1213 14:58:13.942448 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.942455 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:13.942460 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:13.942521 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:13.966591 1302865 cri.go:89] found id: ""
	I1213 14:58:13.966606 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.966613 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:13.966621 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:13.966632 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:13.983200 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:13.983217 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:14.050601 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:14.041722   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.042310   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044028   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044598   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.046179   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:14.041722   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.042310   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044028   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044598   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.046179   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:14.050610 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:14.050622 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:14.111742 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:14.111761 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:14.139171 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:14.139189 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:16.694504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:16.704690 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:16.704753 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:16.730421 1302865 cri.go:89] found id: ""
	I1213 14:58:16.730436 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.730444 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:16.730449 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:16.730510 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:16.755642 1302865 cri.go:89] found id: ""
	I1213 14:58:16.755657 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.755676 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:16.755681 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:16.755741 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:16.780583 1302865 cri.go:89] found id: ""
	I1213 14:58:16.780597 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.780604 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:16.780610 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:16.780685 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:16.809520 1302865 cri.go:89] found id: ""
	I1213 14:58:16.809534 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.809542 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:16.809547 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:16.809606 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:16.845772 1302865 cri.go:89] found id: ""
	I1213 14:58:16.845787 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.845794 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:16.845799 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:16.845867 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:16.871303 1302865 cri.go:89] found id: ""
	I1213 14:58:16.871338 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.871345 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:16.871350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:16.871411 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:16.897846 1302865 cri.go:89] found id: ""
	I1213 14:58:16.897859 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.897866 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:16.897875 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:16.897885 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:16.959059 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:16.959079 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:16.996406 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:16.996421 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:17.052568 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:17.052589 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:17.069678 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:17.069696 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:17.133677 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:17.125422   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.125964   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.127591   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.128083   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.129662   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:17.125422   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.125964   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.127591   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.128083   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.129662   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:19.633920 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:19.644044 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:19.644109 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:19.668667 1302865 cri.go:89] found id: ""
	I1213 14:58:19.668681 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.668688 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:19.668693 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:19.668759 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:19.693045 1302865 cri.go:89] found id: ""
	I1213 14:58:19.693059 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.693066 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:19.693071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:19.693134 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:19.717622 1302865 cri.go:89] found id: ""
	I1213 14:58:19.717637 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.717643 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:19.717649 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:19.717708 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:19.742933 1302865 cri.go:89] found id: ""
	I1213 14:58:19.742948 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.742954 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:19.742962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:19.743024 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:19.767055 1302865 cri.go:89] found id: ""
	I1213 14:58:19.767069 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.767076 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:19.767081 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:19.767139 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:19.793086 1302865 cri.go:89] found id: ""
	I1213 14:58:19.793100 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.793107 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:19.793112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:19.793172 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:19.816884 1302865 cri.go:89] found id: ""
	I1213 14:58:19.816898 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.816905 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:19.816912 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:19.816927 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:19.833746 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:19.833763 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:19.912181 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:19.904591   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.905016   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906518   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906824   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.908282   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:19.904591   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.905016   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906518   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906824   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.908282   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:19.912191 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:19.912202 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:19.973611 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:19.973631 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:20.005249 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:20.005269 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:22.571015 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:22.581487 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:22.581553 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:22.606385 1302865 cri.go:89] found id: ""
	I1213 14:58:22.606399 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.606405 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:22.606411 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:22.606466 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:22.631290 1302865 cri.go:89] found id: ""
	I1213 14:58:22.631304 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.631330 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:22.631341 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:22.631402 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:22.656039 1302865 cri.go:89] found id: ""
	I1213 14:58:22.656053 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.656059 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:22.656064 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:22.656123 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:22.680255 1302865 cri.go:89] found id: ""
	I1213 14:58:22.680268 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.680275 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:22.680281 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:22.680339 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:22.705412 1302865 cri.go:89] found id: ""
	I1213 14:58:22.705426 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.705434 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:22.705439 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:22.705501 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:22.729869 1302865 cri.go:89] found id: ""
	I1213 14:58:22.729885 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.729891 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:22.729897 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:22.729961 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:22.757980 1302865 cri.go:89] found id: ""
	I1213 14:58:22.757994 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.758001 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:22.758009 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:22.758022 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:22.774416 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:22.774433 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:22.850017 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:22.837138   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.837894   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.843564   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.844183   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.845796   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:22.837138   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.837894   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.843564   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.844183   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.845796   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:22.850034 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:22.850045 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:22.916305 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:22.916327 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:22.946422 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:22.946438 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:25.504766 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:25.515062 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:25.515129 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:25.539801 1302865 cri.go:89] found id: ""
	I1213 14:58:25.539815 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.539822 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:25.539827 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:25.539888 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:25.564134 1302865 cri.go:89] found id: ""
	I1213 14:58:25.564148 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.564155 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:25.564159 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:25.564218 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:25.588150 1302865 cri.go:89] found id: ""
	I1213 14:58:25.588165 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.588173 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:25.588178 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:25.588239 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:25.613567 1302865 cri.go:89] found id: ""
	I1213 14:58:25.613581 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.613588 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:25.613593 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:25.613659 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:25.643274 1302865 cri.go:89] found id: ""
	I1213 14:58:25.643290 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.643297 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:25.643303 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:25.643388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:25.668136 1302865 cri.go:89] found id: ""
	I1213 14:58:25.668150 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.668157 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:25.668162 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:25.668223 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:25.693114 1302865 cri.go:89] found id: ""
	I1213 14:58:25.693128 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.693135 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:25.693143 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:25.693152 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:25.751087 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:25.751106 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:25.768578 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:25.768598 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:25.842306 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:25.826336   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.827040   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.829047   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.830610   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.833626   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:25.826336   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.827040   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.829047   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.830610   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.833626   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:25.842315 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:25.842325 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:25.934744 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:25.934771 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:28.468857 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:28.479478 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:28.479543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:28.509273 1302865 cri.go:89] found id: ""
	I1213 14:58:28.509286 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.509293 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:28.509299 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:28.509360 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:28.535574 1302865 cri.go:89] found id: ""
	I1213 14:58:28.535588 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.535595 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:28.535601 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:28.535660 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:28.561231 1302865 cri.go:89] found id: ""
	I1213 14:58:28.561244 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.561251 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:28.561256 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:28.561316 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:28.586867 1302865 cri.go:89] found id: ""
	I1213 14:58:28.586881 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.586897 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:28.586903 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:28.586971 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:28.613781 1302865 cri.go:89] found id: ""
	I1213 14:58:28.613795 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.613802 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:28.613807 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:28.613865 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:28.639226 1302865 cri.go:89] found id: ""
	I1213 14:58:28.639247 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.639255 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:28.639260 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:28.639351 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:28.664957 1302865 cri.go:89] found id: ""
	I1213 14:58:28.664971 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.664977 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:28.664985 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:28.664995 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:28.681545 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:28.681562 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:28.746274 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:28.738221   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.738895   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740429   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740922   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.742410   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:28.738221   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.738895   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740429   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740922   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.742410   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:28.746286 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:28.746297 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:28.811866 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:28.811886 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:28.853916 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:28.853932 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:31.417796 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:31.427841 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:31.427906 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:31.454876 1302865 cri.go:89] found id: ""
	I1213 14:58:31.454890 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.454897 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:31.454903 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:31.454967 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:31.478745 1302865 cri.go:89] found id: ""
	I1213 14:58:31.478763 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.478770 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:31.478774 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:31.478834 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:31.504045 1302865 cri.go:89] found id: ""
	I1213 14:58:31.504059 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.504066 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:31.504071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:31.504132 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:31.536667 1302865 cri.go:89] found id: ""
	I1213 14:58:31.536687 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.536694 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:31.536699 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:31.536759 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:31.561651 1302865 cri.go:89] found id: ""
	I1213 14:58:31.561665 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.561672 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:31.561679 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:31.561740 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:31.590467 1302865 cri.go:89] found id: ""
	I1213 14:58:31.590487 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.590494 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:31.590499 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:31.590572 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:31.621443 1302865 cri.go:89] found id: ""
	I1213 14:58:31.621457 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.621467 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:31.621475 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:31.621485 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:31.689190 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:31.680703   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.681594   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.682366   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.683913   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.684339   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:31.680703   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.681594   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.682366   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.683913   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.684339   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:31.689199 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:31.689210 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:31.750918 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:31.750940 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:31.777989 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:31.778007 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:31.837415 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:31.837438 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:34.355220 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:34.365583 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:34.365646 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:34.390861 1302865 cri.go:89] found id: ""
	I1213 14:58:34.390875 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.390882 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:34.390887 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:34.390945 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:34.419452 1302865 cri.go:89] found id: ""
	I1213 14:58:34.419466 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.419473 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:34.419478 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:34.419540 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:34.444048 1302865 cri.go:89] found id: ""
	I1213 14:58:34.444062 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.444069 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:34.444073 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:34.444135 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:34.472603 1302865 cri.go:89] found id: ""
	I1213 14:58:34.472617 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.472623 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:34.472629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:34.472693 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:34.496330 1302865 cri.go:89] found id: ""
	I1213 14:58:34.496344 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.496351 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:34.496356 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:34.496415 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:34.521267 1302865 cri.go:89] found id: ""
	I1213 14:58:34.521281 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.521288 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:34.521294 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:34.521355 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:34.545219 1302865 cri.go:89] found id: ""
	I1213 14:58:34.545234 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.545241 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:34.545248 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:34.545263 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:34.611331 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:34.602304   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.603074   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.604885   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.605533   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.607098   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:34.602304   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.603074   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.604885   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.605533   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.607098   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:34.611342 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:34.611352 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:34.674005 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:34.674023 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:34.701768 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:34.701784 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:34.760313 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:34.760332 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
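Each retry cycle above begins by probing the node for a running apiserver process and then for every expected control-plane container by name, which is why the same "found id: """ / "0 containers" pair repeats for each component. A minimal sketch of that probe, built only from the commands visible in the log lines (it assumes shell access to the minikube node and that crictl is installed there; everything else is copied verbatim from the log):

    # Probe for the apiserver process and the expected control-plane containers,
    # mirroring the commands the log shows being run via ssh_runner.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
      fi
    done

In this run every probe returns an empty ID list, so each cycle falls through to the log-gathering step that follows.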
	I1213 14:58:37.279813 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:37.289901 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:37.289961 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:37.314082 1302865 cri.go:89] found id: ""
	I1213 14:58:37.314097 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.314103 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:37.314115 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:37.314174 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:37.349456 1302865 cri.go:89] found id: ""
	I1213 14:58:37.349470 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.349477 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:37.349482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:37.349540 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:37.376791 1302865 cri.go:89] found id: ""
	I1213 14:58:37.376805 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.376812 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:37.376817 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:37.376877 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:37.400702 1302865 cri.go:89] found id: ""
	I1213 14:58:37.400717 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.400724 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:37.400730 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:37.400792 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:37.424348 1302865 cri.go:89] found id: ""
	I1213 14:58:37.424363 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.424370 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:37.424375 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:37.424435 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:37.449182 1302865 cri.go:89] found id: ""
	I1213 14:58:37.449197 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.449204 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:37.449209 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:37.449270 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:37.476252 1302865 cri.go:89] found id: ""
	I1213 14:58:37.476266 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.476273 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:37.476280 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:37.476294 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:37.534602 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:37.534621 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:37.552019 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:37.552037 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:37.614270 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:37.605713   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.606478   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608056   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608699   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.610244   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:37.605713   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.606478   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608056   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608699   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.610244   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:37.614281 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:37.614292 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:37.676894 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:37.676913 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:40.209558 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:40.220003 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:40.220065 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:40.246553 1302865 cri.go:89] found id: ""
	I1213 14:58:40.246567 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.246574 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:40.246579 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:40.246642 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:40.270663 1302865 cri.go:89] found id: ""
	I1213 14:58:40.270677 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.270684 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:40.270689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:40.270750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:40.296263 1302865 cri.go:89] found id: ""
	I1213 14:58:40.296278 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.296285 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:40.296292 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:40.296352 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:40.320181 1302865 cri.go:89] found id: ""
	I1213 14:58:40.320195 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.320204 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:40.320208 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:40.320268 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:40.345140 1302865 cri.go:89] found id: ""
	I1213 14:58:40.345155 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.345162 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:40.345167 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:40.345236 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:40.368989 1302865 cri.go:89] found id: ""
	I1213 14:58:40.369003 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.369010 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:40.369015 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:40.369075 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:40.393631 1302865 cri.go:89] found id: ""
	I1213 14:58:40.393646 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.393653 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:40.393661 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:40.393672 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:40.421318 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:40.421334 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:40.480359 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:40.480379 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:40.497525 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:40.497544 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:40.565603 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:40.557124   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.557721   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559295   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559804   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.561673   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:40.557124   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.557721   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559295   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559804   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.561673   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:40.565614 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:40.565625 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
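When no containers are found, the cycle collects node-level diagnostics instead: the kubelet and containerd journals, dmesg, container status, and a "describe nodes" call that keeps failing because nothing is listening on localhost:8441. The same collection can be reproduced by hand with the commands the log records (the binary path and kubeconfig location are taken from the log; adjust them for a different cluster):

    # Collect the same diagnostics the retry loop gathers on each failed probe.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig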
	I1213 14:58:43.127433 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:43.141684 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:43.141744 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:43.166921 1302865 cri.go:89] found id: ""
	I1213 14:58:43.166935 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.166942 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:43.166947 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:43.167010 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:43.191796 1302865 cri.go:89] found id: ""
	I1213 14:58:43.191810 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.191817 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:43.191823 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:43.191883 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:43.220968 1302865 cri.go:89] found id: ""
	I1213 14:58:43.220982 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.220988 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:43.220993 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:43.221050 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:43.249138 1302865 cri.go:89] found id: ""
	I1213 14:58:43.249153 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.249160 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:43.249166 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:43.249226 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:43.273972 1302865 cri.go:89] found id: ""
	I1213 14:58:43.273986 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.273993 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:43.273998 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:43.274056 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:43.298424 1302865 cri.go:89] found id: ""
	I1213 14:58:43.298439 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.298446 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:43.298451 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:43.298523 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:43.326886 1302865 cri.go:89] found id: ""
	I1213 14:58:43.326900 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.326907 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:43.326915 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:43.326925 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:43.383183 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:43.383202 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:43.401545 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:43.401564 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:43.472321 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:43.463674   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.464321   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466101   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466707   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.468411   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:43.463674   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.464321   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466101   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466707   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.468411   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:43.472331 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:43.472347 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:43.535483 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:43.535504 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:46.069443 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:46.079671 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:46.079735 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:46.112232 1302865 cri.go:89] found id: ""
	I1213 14:58:46.112246 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.112263 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:46.112268 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:46.112334 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:46.143946 1302865 cri.go:89] found id: ""
	I1213 14:58:46.143960 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.143968 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:46.143973 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:46.144034 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:46.172869 1302865 cri.go:89] found id: ""
	I1213 14:58:46.172893 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.172901 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:46.172906 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:46.172969 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:46.198118 1302865 cri.go:89] found id: ""
	I1213 14:58:46.198132 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.198139 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:46.198144 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:46.198210 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:46.226657 1302865 cri.go:89] found id: ""
	I1213 14:58:46.226672 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.226679 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:46.226689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:46.226750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:46.250158 1302865 cri.go:89] found id: ""
	I1213 14:58:46.250183 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.250190 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:46.250199 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:46.250268 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:46.275259 1302865 cri.go:89] found id: ""
	I1213 14:58:46.275274 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.275281 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:46.275303 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:46.275335 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:46.349416 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:46.340779   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.341521   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343041   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343652   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.345289   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:46.340779   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.341521   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343041   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343652   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.345289   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:46.349427 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:46.349440 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:46.412854 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:46.412874 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:46.443625 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:46.443641 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:46.501088 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:46.501108 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:49.018999 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:49.029334 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:49.029404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:49.054853 1302865 cri.go:89] found id: ""
	I1213 14:58:49.054867 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.054874 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:49.054879 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:49.054941 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:49.081166 1302865 cri.go:89] found id: ""
	I1213 14:58:49.081185 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.081193 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:49.081198 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:49.081261 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:49.109404 1302865 cri.go:89] found id: ""
	I1213 14:58:49.109418 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.109425 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:49.109430 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:49.109493 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:49.136643 1302865 cri.go:89] found id: ""
	I1213 14:58:49.136658 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.136665 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:49.136670 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:49.136741 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:49.165751 1302865 cri.go:89] found id: ""
	I1213 14:58:49.165765 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.165772 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:49.165777 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:49.165837 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:49.193225 1302865 cri.go:89] found id: ""
	I1213 14:58:49.193239 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.193246 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:49.193252 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:49.193314 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:49.221440 1302865 cri.go:89] found id: ""
	I1213 14:58:49.221455 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.221462 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:49.221470 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:49.221480 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:49.277216 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:49.277234 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:49.293907 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:49.293927 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:49.356075 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:49.348073   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.348458   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350225   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350571   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.352111   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:49.348073   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.348458   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350225   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350571   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.352111   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:49.356085 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:49.356095 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:49.418015 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:49.418034 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:51.951013 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:51.961457 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:51.961522 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:51.988624 1302865 cri.go:89] found id: ""
	I1213 14:58:51.988638 1302865 logs.go:282] 0 containers: []
	W1213 14:58:51.988645 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:51.988650 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:51.988725 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:52.015499 1302865 cri.go:89] found id: ""
	I1213 14:58:52.015513 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.015520 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:52.015526 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:52.015589 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:52.041762 1302865 cri.go:89] found id: ""
	I1213 14:58:52.041777 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.041784 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:52.041789 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:52.041850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:52.068323 1302865 cri.go:89] found id: ""
	I1213 14:58:52.068338 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.068345 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:52.068350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:52.068415 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:52.106065 1302865 cri.go:89] found id: ""
	I1213 14:58:52.106079 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.106086 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:52.106091 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:52.106160 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:52.140252 1302865 cri.go:89] found id: ""
	I1213 14:58:52.140272 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.140279 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:52.140284 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:52.140343 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:52.167100 1302865 cri.go:89] found id: ""
	I1213 14:58:52.167113 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.167120 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:52.167128 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:52.167138 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:52.226191 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:52.226210 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:52.243667 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:52.243683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:52.311033 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:52.302537   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.303096   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.304747   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.305085   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.306664   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:52.302537   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.303096   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.304747   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.305085   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.306664   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:52.311046 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:52.311057 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:52.372679 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:52.372703 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:54.903108 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:54.913373 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:54.913436 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:54.938658 1302865 cri.go:89] found id: ""
	I1213 14:58:54.938673 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.938680 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:54.938686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:54.938753 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:54.962838 1302865 cri.go:89] found id: ""
	I1213 14:58:54.962851 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.962866 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:54.962871 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:54.962942 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:54.988758 1302865 cri.go:89] found id: ""
	I1213 14:58:54.988773 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.988780 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:54.988785 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:54.988855 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:55.021177 1302865 cri.go:89] found id: ""
	I1213 14:58:55.021192 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.021200 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:55.021206 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:55.021272 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:55.049330 1302865 cri.go:89] found id: ""
	I1213 14:58:55.049344 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.049356 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:55.049361 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:55.049421 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:55.079835 1302865 cri.go:89] found id: ""
	I1213 14:58:55.079849 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.079856 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:55.079861 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:55.079920 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:55.107073 1302865 cri.go:89] found id: ""
	I1213 14:58:55.107087 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.107094 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:55.107102 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:55.107112 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:55.165853 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:55.165871 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:55.183109 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:55.183127 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:55.251642 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:55.242691   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.243211   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.244929   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.245470   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.247181   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:55.242691   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.243211   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.244929   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.245470   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.247181   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:55.251652 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:55.251664 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:55.317380 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:55.317399 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:57.847271 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:57.857537 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:57.857603 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:57.882391 1302865 cri.go:89] found id: ""
	I1213 14:58:57.882405 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.882412 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:57.882417 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:57.882490 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:57.905909 1302865 cri.go:89] found id: ""
	I1213 14:58:57.905923 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.905943 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:57.905948 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:57.906018 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:57.930237 1302865 cri.go:89] found id: ""
	I1213 14:58:57.930252 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.930259 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:57.930264 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:57.930337 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:57.958985 1302865 cri.go:89] found id: ""
	I1213 14:58:57.959014 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.959020 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:57.959031 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:57.959099 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:57.983693 1302865 cri.go:89] found id: ""
	I1213 14:58:57.983707 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.983714 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:57.983719 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:57.983779 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:58.012155 1302865 cri.go:89] found id: ""
	I1213 14:58:58.012170 1302865 logs.go:282] 0 containers: []
	W1213 14:58:58.012178 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:58.012183 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:58.012250 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:58.043700 1302865 cri.go:89] found id: ""
	I1213 14:58:58.043714 1302865 logs.go:282] 0 containers: []
	W1213 14:58:58.043722 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:58.043730 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:58.043742 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:58.105070 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:58.105098 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:58.123698 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:58.123717 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:58.194632 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:58.186276   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.187012   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.188759   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.189247   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.190768   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:58.186276   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.187012   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.188759   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.189247   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.190768   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:58.194642 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:58.194653 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:58.256210 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:58.256230 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:00.787680 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:00.798261 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:00.798326 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:00.826895 1302865 cri.go:89] found id: ""
	I1213 14:59:00.826908 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.826915 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:00.826921 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:00.826980 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:00.851410 1302865 cri.go:89] found id: ""
	I1213 14:59:00.851424 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.851431 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:00.851437 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:00.851510 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:00.876891 1302865 cri.go:89] found id: ""
	I1213 14:59:00.876906 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.876912 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:00.876917 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:00.876975 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:00.900564 1302865 cri.go:89] found id: ""
	I1213 14:59:00.900578 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.900585 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:00.900589 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:00.900647 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:00.925560 1302865 cri.go:89] found id: ""
	I1213 14:59:00.925574 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.925581 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:00.925586 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:00.925647 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:00.954298 1302865 cri.go:89] found id: ""
	I1213 14:59:00.954311 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.954319 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:00.954330 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:00.954388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:00.980684 1302865 cri.go:89] found id: ""
	I1213 14:59:00.980698 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.980704 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:00.980718 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:00.980731 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:01.048024 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:01.039594   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.040152   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.041748   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.042294   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.043863   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:01.039594   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.040152   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.041748   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.042294   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.043863   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:01.048033 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:01.048044 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:01.110723 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:01.110742 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:01.144966 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:01.144983 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:01.203272 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:01.203301 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:03.722770 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:03.733112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:03.733170 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:03.761042 1302865 cri.go:89] found id: ""
	I1213 14:59:03.761057 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.761064 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:03.761069 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:03.761130 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:03.789429 1302865 cri.go:89] found id: ""
	I1213 14:59:03.789443 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.789450 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:03.789455 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:03.789521 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:03.816916 1302865 cri.go:89] found id: ""
	I1213 14:59:03.816930 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.816937 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:03.816942 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:03.817001 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:03.844301 1302865 cri.go:89] found id: ""
	I1213 14:59:03.844317 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.844324 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:03.844329 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:03.844388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:03.873060 1302865 cri.go:89] found id: ""
	I1213 14:59:03.873075 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.873082 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:03.873087 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:03.873147 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:03.910513 1302865 cri.go:89] found id: ""
	I1213 14:59:03.910527 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.910534 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:03.910539 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:03.910601 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:03.938039 1302865 cri.go:89] found id: ""
	I1213 14:59:03.938053 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.938060 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:03.938067 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:03.938077 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:03.993458 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:03.993478 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:04.011140 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:04.011157 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:04.078339 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:04.069502   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.070272   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072094   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072604   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.074176   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:04.069502   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.070272   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072094   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072604   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.074176   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:04.078350 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:04.078361 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:04.142915 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:04.142934 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:06.673444 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:06.683643 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:06.683703 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:06.708707 1302865 cri.go:89] found id: ""
	I1213 14:59:06.708727 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.708734 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:06.708739 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:06.708799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:06.734465 1302865 cri.go:89] found id: ""
	I1213 14:59:06.734479 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.734486 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:06.734495 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:06.734584 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:06.759590 1302865 cri.go:89] found id: ""
	I1213 14:59:06.759603 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.759610 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:06.759615 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:06.759674 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:06.785693 1302865 cri.go:89] found id: ""
	I1213 14:59:06.785706 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.785713 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:06.785720 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:06.785777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:06.810125 1302865 cri.go:89] found id: ""
	I1213 14:59:06.810139 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.810146 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:06.810151 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:06.810215 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:06.835783 1302865 cri.go:89] found id: ""
	I1213 14:59:06.835797 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.835804 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:06.835809 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:06.835869 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:06.860909 1302865 cri.go:89] found id: ""
	I1213 14:59:06.860922 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.860929 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:06.860936 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:06.860946 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:06.916027 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:06.916047 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:06.933118 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:06.933135 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:06.997759 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:06.989536   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.990242   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.991821   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.992353   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.993901   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:06.989536   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.990242   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.991821   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.992353   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.993901   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:06.997769 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:06.997779 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:07.059939 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:07.059961 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:09.591076 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:09.601913 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:09.601975 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:09.626204 1302865 cri.go:89] found id: ""
	I1213 14:59:09.626218 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.626225 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:09.626230 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:09.626289 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:09.653443 1302865 cri.go:89] found id: ""
	I1213 14:59:09.653457 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.653463 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:09.653469 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:09.653531 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:09.678836 1302865 cri.go:89] found id: ""
	I1213 14:59:09.678851 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.678858 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:09.678865 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:09.678924 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:09.704492 1302865 cri.go:89] found id: ""
	I1213 14:59:09.704506 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.704514 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:09.704519 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:09.704581 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:09.733333 1302865 cri.go:89] found id: ""
	I1213 14:59:09.733355 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.733363 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:09.733368 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:09.733431 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:09.758847 1302865 cri.go:89] found id: ""
	I1213 14:59:09.758861 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.758869 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:09.758874 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:09.758946 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:09.785932 1302865 cri.go:89] found id: ""
	I1213 14:59:09.785946 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.785953 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:09.785962 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:09.785973 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:09.842054 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:09.842073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:09.859249 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:09.859273 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:09.924527 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:09.916673   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.917242   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.918722   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.919219   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.920662   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:09.916673   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.917242   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.918722   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.919219   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.920662   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:09.924536 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:09.924546 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:09.987531 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:09.987550 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:12.517373 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:12.529230 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:12.529292 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:12.558354 1302865 cri.go:89] found id: ""
	I1213 14:59:12.558368 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.558375 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:12.558380 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:12.558439 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:12.585312 1302865 cri.go:89] found id: ""
	I1213 14:59:12.585326 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.585333 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:12.585338 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:12.585396 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:12.613481 1302865 cri.go:89] found id: ""
	I1213 14:59:12.613494 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.613501 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:12.613506 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:12.613564 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:12.636592 1302865 cri.go:89] found id: ""
	I1213 14:59:12.636614 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.636621 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:12.636627 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:12.636694 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:12.660499 1302865 cri.go:89] found id: ""
	I1213 14:59:12.660513 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.660520 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:12.660524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:12.660591 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:12.684274 1302865 cri.go:89] found id: ""
	I1213 14:59:12.684297 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.684304 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:12.684309 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:12.684377 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:12.715959 1302865 cri.go:89] found id: ""
	I1213 14:59:12.715973 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.715980 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:12.715992 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:12.716003 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:12.779780 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:12.771561   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.772194   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.773845   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.774330   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.775929   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:12.771561   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.772194   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.773845   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.774330   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.775929   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:12.779790 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:12.779801 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:12.840858 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:12.840877 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:12.870238 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:12.870256 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:12.930596 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:12.930615 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:15.449328 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:15.460194 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:15.460255 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:15.484663 1302865 cri.go:89] found id: ""
	I1213 14:59:15.484677 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.484683 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:15.484689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:15.484799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:15.513604 1302865 cri.go:89] found id: ""
	I1213 14:59:15.513619 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.513626 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:15.513631 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:15.513692 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:15.543496 1302865 cri.go:89] found id: ""
	I1213 14:59:15.543510 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.543517 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:15.543524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:15.543596 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:15.576119 1302865 cri.go:89] found id: ""
	I1213 14:59:15.576133 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.576140 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:15.576145 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:15.576207 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:15.600649 1302865 cri.go:89] found id: ""
	I1213 14:59:15.600663 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.600670 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:15.600675 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:15.600743 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:15.624956 1302865 cri.go:89] found id: ""
	I1213 14:59:15.624970 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.624977 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:15.624984 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:15.625045 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:15.649687 1302865 cri.go:89] found id: ""
	I1213 14:59:15.649700 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.649707 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:15.649717 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:15.649728 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:15.711417 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:15.711439 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:15.739859 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:15.739876 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:15.796008 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:15.796027 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:15.813254 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:15.813271 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:15.889756 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:15.881341   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.881741   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883505   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883986   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.885525   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:15.881341   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.881741   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883505   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883986   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.885525   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:18.390805 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:18.401397 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:18.401458 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:18.426479 1302865 cri.go:89] found id: ""
	I1213 14:59:18.426493 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.426501 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:18.426507 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:18.426569 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:18.451763 1302865 cri.go:89] found id: ""
	I1213 14:59:18.451777 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.451784 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:18.451788 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:18.451846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:18.475994 1302865 cri.go:89] found id: ""
	I1213 14:59:18.476008 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.476015 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:18.476020 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:18.476080 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:18.500350 1302865 cri.go:89] found id: ""
	I1213 14:59:18.500363 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.500371 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:18.500376 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:18.500436 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:18.524126 1302865 cri.go:89] found id: ""
	I1213 14:59:18.524178 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.524186 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:18.524191 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:18.524251 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:18.552637 1302865 cri.go:89] found id: ""
	I1213 14:59:18.552650 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.552657 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:18.552668 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:18.552735 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:18.576409 1302865 cri.go:89] found id: ""
	I1213 14:59:18.576423 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.576430 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:18.576437 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:18.576448 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:18.632727 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:18.632750 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:18.649857 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:18.649874 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:18.717909 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:18.709647   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.710444   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712120   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712687   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.714255   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:18.709647   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.710444   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712120   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712687   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.714255   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:18.717920 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:18.717930 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:18.779709 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:18.779731 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:21.307289 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:21.317675 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:21.317738 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:21.357856 1302865 cri.go:89] found id: ""
	I1213 14:59:21.357870 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.357886 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:21.357892 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:21.357952 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:21.383442 1302865 cri.go:89] found id: ""
	I1213 14:59:21.383456 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.383478 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:21.383483 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:21.383550 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:21.410523 1302865 cri.go:89] found id: ""
	I1213 14:59:21.410537 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.410544 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:21.410549 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:21.410606 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:21.437275 1302865 cri.go:89] found id: ""
	I1213 14:59:21.437289 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.437296 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:21.437303 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:21.437361 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:21.460786 1302865 cri.go:89] found id: ""
	I1213 14:59:21.460800 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.460807 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:21.460813 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:21.460871 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:21.484394 1302865 cri.go:89] found id: ""
	I1213 14:59:21.484409 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.484416 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:21.484422 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:21.484481 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:21.513384 1302865 cri.go:89] found id: ""
	I1213 14:59:21.513398 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.513405 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:21.513413 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:21.513423 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:21.568892 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:21.568912 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:21.586837 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:21.586854 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:21.662678 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:21.654029   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.654719   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.656490   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.657142   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.658705   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:21.654029   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.654719   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.656490   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.657142   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.658705   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:21.662688 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:21.662699 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:21.736289 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:21.736318 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:24.267273 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:24.277337 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:24.277401 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:24.300799 1302865 cri.go:89] found id: ""
	I1213 14:59:24.300813 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.300820 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:24.300825 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:24.300883 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:24.329119 1302865 cri.go:89] found id: ""
	I1213 14:59:24.329133 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.329140 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:24.329145 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:24.329207 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:24.359906 1302865 cri.go:89] found id: ""
	I1213 14:59:24.359920 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.359927 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:24.359934 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:24.359993 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:24.388174 1302865 cri.go:89] found id: ""
	I1213 14:59:24.388188 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.388195 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:24.388201 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:24.388265 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:24.416221 1302865 cri.go:89] found id: ""
	I1213 14:59:24.416235 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.416242 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:24.416247 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:24.416306 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:24.441358 1302865 cri.go:89] found id: ""
	I1213 14:59:24.441373 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.441380 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:24.441385 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:24.441444 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:24.465868 1302865 cri.go:89] found id: ""
	I1213 14:59:24.465882 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.465889 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:24.465897 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:24.465907 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:24.522170 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:24.522189 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:24.539720 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:24.539741 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:24.605986 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:24.597621   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.598252   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.599831   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.600201   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.601630   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:24.597621   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.598252   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.599831   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.600201   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.601630   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:24.605996 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:24.606007 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:24.667358 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:24.667377 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:27.195225 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:27.205377 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:27.205438 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:27.229665 1302865 cri.go:89] found id: ""
	I1213 14:59:27.229679 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.229686 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:27.229692 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:27.229755 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:27.253927 1302865 cri.go:89] found id: ""
	I1213 14:59:27.253943 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.253950 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:27.253961 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:27.254022 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:27.277865 1302865 cri.go:89] found id: ""
	I1213 14:59:27.277879 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.277886 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:27.277891 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:27.277949 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:27.305956 1302865 cri.go:89] found id: ""
	I1213 14:59:27.305969 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.305977 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:27.305982 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:27.306041 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:27.330227 1302865 cri.go:89] found id: ""
	I1213 14:59:27.330241 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.330248 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:27.330253 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:27.330312 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:27.367738 1302865 cri.go:89] found id: ""
	I1213 14:59:27.367752 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.367759 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:27.367764 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:27.367823 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:27.400224 1302865 cri.go:89] found id: ""
	I1213 14:59:27.400239 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.400254 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:27.400262 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:27.400272 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:27.428506 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:27.428525 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:27.484755 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:27.484775 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:27.501783 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:27.501800 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:27.568006 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:27.559400   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.559958   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.561857   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.562433   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.564051   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:27.559400   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.559958   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.561857   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.562433   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.564051   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:27.568017 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:27.568029 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:30.130924 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:30.142124 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:30.142187 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:30.168272 1302865 cri.go:89] found id: ""
	I1213 14:59:30.168286 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.168301 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:30.168306 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:30.168379 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:30.198491 1302865 cri.go:89] found id: ""
	I1213 14:59:30.198507 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.198515 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:30.198520 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:30.198583 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:30.224307 1302865 cri.go:89] found id: ""
	I1213 14:59:30.224321 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.224329 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:30.224334 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:30.224398 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:30.252127 1302865 cri.go:89] found id: ""
	I1213 14:59:30.252142 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.252150 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:30.252155 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:30.252216 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:30.277686 1302865 cri.go:89] found id: ""
	I1213 14:59:30.277700 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.277707 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:30.277712 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:30.277773 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:30.302751 1302865 cri.go:89] found id: ""
	I1213 14:59:30.302766 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.302773 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:30.302779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:30.302864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:30.331699 1302865 cri.go:89] found id: ""
	I1213 14:59:30.331713 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.331720 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:30.331727 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:30.331741 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:30.384091 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:30.384107 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:30.448178 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:30.448197 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:30.465395 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:30.465414 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:30.525911 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:30.518056   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.518498   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.519701   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.520227   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.521907   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:30.518056   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.518498   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.519701   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.520227   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.521907   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:30.525921 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:30.525931 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:33.088366 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:33.098677 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:33.098747 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:33.123559 1302865 cri.go:89] found id: ""
	I1213 14:59:33.123574 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.123581 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:33.123587 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:33.123648 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:33.149199 1302865 cri.go:89] found id: ""
	I1213 14:59:33.149214 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.149221 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:33.149231 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:33.149294 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:33.174660 1302865 cri.go:89] found id: ""
	I1213 14:59:33.174674 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.174681 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:33.174686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:33.174747 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:33.199686 1302865 cri.go:89] found id: ""
	I1213 14:59:33.199701 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.199709 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:33.199714 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:33.199776 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:33.223975 1302865 cri.go:89] found id: ""
	I1213 14:59:33.223990 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.223997 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:33.224002 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:33.224062 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:33.248004 1302865 cri.go:89] found id: ""
	I1213 14:59:33.248019 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.248026 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:33.248032 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:33.248099 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:33.272806 1302865 cri.go:89] found id: ""
	I1213 14:59:33.272821 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.272829 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:33.272837 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:33.272847 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:33.300705 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:33.300722 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:33.363767 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:33.363786 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:33.382421 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:33.382440 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:33.450503 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:33.442115   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.442703   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444236   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444803   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.446325   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:33.442115   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.442703   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444236   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444803   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.446325   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:33.450514 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:33.450526 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:36.015724 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:36.026901 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:36.026965 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:36.053629 1302865 cri.go:89] found id: ""
	I1213 14:59:36.053645 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.053653 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:36.053658 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:36.053722 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:36.080154 1302865 cri.go:89] found id: ""
	I1213 14:59:36.080170 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.080177 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:36.080183 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:36.080247 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:36.105197 1302865 cri.go:89] found id: ""
	I1213 14:59:36.105212 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.105219 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:36.105224 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:36.105284 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:36.129426 1302865 cri.go:89] found id: ""
	I1213 14:59:36.129440 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.129453 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:36.129458 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:36.129516 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:36.157680 1302865 cri.go:89] found id: ""
	I1213 14:59:36.157695 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.157702 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:36.157707 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:36.157768 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:36.186306 1302865 cri.go:89] found id: ""
	I1213 14:59:36.186320 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.186327 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:36.186333 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:36.186404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:36.210490 1302865 cri.go:89] found id: ""
	I1213 14:59:36.210504 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.210511 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:36.210518 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:36.210528 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:36.265225 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:36.265244 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:36.282625 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:36.282641 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:36.356056 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:36.344168   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.345304   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.346780   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.347080   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.348284   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:36.344168   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.345304   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.346780   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.347080   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.348284   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:36.356066 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:36.356078 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:36.426572 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:36.426595 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:38.953386 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:38.964071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:38.964149 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:38.987398 1302865 cri.go:89] found id: ""
	I1213 14:59:38.987412 1302865 logs.go:282] 0 containers: []
	W1213 14:59:38.987420 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:38.987426 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:38.987501 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:39.014333 1302865 cri.go:89] found id: ""
	I1213 14:59:39.014348 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.014355 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:39.014360 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:39.014425 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:39.041685 1302865 cri.go:89] found id: ""
	I1213 14:59:39.041699 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.041706 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:39.041711 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:39.041773 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:39.065151 1302865 cri.go:89] found id: ""
	I1213 14:59:39.065165 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.065172 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:39.065177 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:39.065236 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:39.089601 1302865 cri.go:89] found id: ""
	I1213 14:59:39.089614 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.089621 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:39.089629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:39.089695 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:39.114392 1302865 cri.go:89] found id: ""
	I1213 14:59:39.114406 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.114413 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:39.114418 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:39.114479 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:39.139175 1302865 cri.go:89] found id: ""
	I1213 14:59:39.139188 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.139195 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:39.139204 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:39.139214 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:39.194900 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:39.194920 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:39.212516 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:39.212534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:39.278353 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:39.270327   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.270899   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272513   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272875   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.274310   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:39.270327   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.270899   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272513   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272875   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.274310   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:39.278363 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:39.278376 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:39.339218 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:39.339237 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:41.878578 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:41.888870 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:41.888930 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:41.916325 1302865 cri.go:89] found id: ""
	I1213 14:59:41.916339 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.916346 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:41.916352 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:41.916408 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:41.940631 1302865 cri.go:89] found id: ""
	I1213 14:59:41.940646 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.940653 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:41.940658 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:41.940721 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:41.964819 1302865 cri.go:89] found id: ""
	I1213 14:59:41.964835 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.964842 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:41.964847 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:41.964909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:41.992880 1302865 cri.go:89] found id: ""
	I1213 14:59:41.992895 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.992902 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:41.992907 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:41.992966 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:42.037181 1302865 cri.go:89] found id: ""
	I1213 14:59:42.037196 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.037203 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:42.037208 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:42.037272 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:42.066224 1302865 cri.go:89] found id: ""
	I1213 14:59:42.066240 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.066247 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:42.066253 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:42.066324 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:42.113241 1302865 cri.go:89] found id: ""
	I1213 14:59:42.113259 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.113267 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:42.113275 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:42.113288 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:42.174660 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:42.174686 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:42.197359 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:42.197391 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:42.287788 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:42.278004   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.278734   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.280708   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.281502   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.282312   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:42.278004   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.278734   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.280708   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.281502   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.282312   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:42.287799 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:42.287810 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:42.353033 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:42.353052 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:44.892059 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:44.902815 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:44.902875 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:44.927725 1302865 cri.go:89] found id: ""
	I1213 14:59:44.927740 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.927747 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:44.927752 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:44.927815 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:44.957287 1302865 cri.go:89] found id: ""
	I1213 14:59:44.957301 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.957308 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:44.957313 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:44.957371 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:44.982138 1302865 cri.go:89] found id: ""
	I1213 14:59:44.982153 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.982160 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:44.982166 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:44.982225 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:45.025671 1302865 cri.go:89] found id: ""
	I1213 14:59:45.025689 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.025697 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:45.025704 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:45.025777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:45.070096 1302865 cri.go:89] found id: ""
	I1213 14:59:45.070112 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.070121 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:45.070126 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:45.070203 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:45.113264 1302865 cri.go:89] found id: ""
	I1213 14:59:45.113281 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.113289 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:45.113302 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:45.113391 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:45.146027 1302865 cri.go:89] found id: ""
	I1213 14:59:45.146050 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.146058 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:45.146073 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:45.146084 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:45.242018 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:45.242086 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:45.278598 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:45.278619 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:45.377053 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:45.367099   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369078   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369934   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.371774   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.372065   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:45.367099   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369078   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369934   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.371774   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.372065   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:45.377063 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:45.377073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:45.449162 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:45.449183 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:47.980927 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:47.991934 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:47.991998 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:48.022075 1302865 cri.go:89] found id: ""
	I1213 14:59:48.022091 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.022098 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:48.022103 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:48.022169 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:48.052438 1302865 cri.go:89] found id: ""
	I1213 14:59:48.052454 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.052461 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:48.052466 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:48.052543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:48.077918 1302865 cri.go:89] found id: ""
	I1213 14:59:48.077932 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.077940 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:48.077945 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:48.078008 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:48.107677 1302865 cri.go:89] found id: ""
	I1213 14:59:48.107691 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.107698 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:48.107703 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:48.107803 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:48.134492 1302865 cri.go:89] found id: ""
	I1213 14:59:48.134506 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.134514 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:48.134523 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:48.134616 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:48.159260 1302865 cri.go:89] found id: ""
	I1213 14:59:48.159274 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.159281 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:48.159286 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:48.159368 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:48.184905 1302865 cri.go:89] found id: ""
	I1213 14:59:48.184920 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.184927 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:48.184935 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:48.184945 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:48.240512 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:48.240535 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:48.257663 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:48.257683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:48.323284 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:48.314914   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.315437   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317182   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317842   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.319544   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:48.314914   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.315437   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317182   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317842   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.319544   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:48.323295 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:48.323306 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:48.393384 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:48.393403 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:50.925922 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:50.936831 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:50.936895 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:50.963232 1302865 cri.go:89] found id: ""
	I1213 14:59:50.963246 1302865 logs.go:282] 0 containers: []
	W1213 14:59:50.963253 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:50.963258 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:50.963354 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:50.993552 1302865 cri.go:89] found id: ""
	I1213 14:59:50.993566 1302865 logs.go:282] 0 containers: []
	W1213 14:59:50.993572 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:50.993578 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:50.993639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:51.021945 1302865 cri.go:89] found id: ""
	I1213 14:59:51.021978 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.021986 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:51.021991 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:51.022051 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:51.049002 1302865 cri.go:89] found id: ""
	I1213 14:59:51.049017 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.049024 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:51.049029 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:51.049113 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:51.075979 1302865 cri.go:89] found id: ""
	I1213 14:59:51.075995 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.076003 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:51.076008 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:51.076071 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:51.101633 1302865 cri.go:89] found id: ""
	I1213 14:59:51.101648 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.101656 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:51.101661 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:51.101724 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:51.128983 1302865 cri.go:89] found id: ""
	I1213 14:59:51.128999 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.129007 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:51.129015 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:51.129025 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:51.185511 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:51.185538 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:51.203284 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:51.203306 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:51.265859 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:51.257020   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.257804   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.259431   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.260025   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.261672   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:51.257020   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.257804   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.259431   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.260025   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.261672   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:51.265869 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:51.265880 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:51.328096 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:51.328116 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:53.857136 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:53.867344 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:53.867405 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:53.890843 1302865 cri.go:89] found id: ""
	I1213 14:59:53.890857 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.890864 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:53.890869 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:53.890927 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:53.915236 1302865 cri.go:89] found id: ""
	I1213 14:59:53.915250 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.915258 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:53.915263 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:53.915341 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:53.939500 1302865 cri.go:89] found id: ""
	I1213 14:59:53.939515 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.939523 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:53.939528 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:53.939588 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:53.968671 1302865 cri.go:89] found id: ""
	I1213 14:59:53.968686 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.968693 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:53.968698 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:53.968766 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:53.992869 1302865 cri.go:89] found id: ""
	I1213 14:59:53.992883 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.992895 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:53.992900 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:53.992962 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:54.020494 1302865 cri.go:89] found id: ""
	I1213 14:59:54.020510 1302865 logs.go:282] 0 containers: []
	W1213 14:59:54.020518 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:54.020524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:54.020587 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:54.047224 1302865 cri.go:89] found id: ""
	I1213 14:59:54.047240 1302865 logs.go:282] 0 containers: []
	W1213 14:59:54.047247 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:54.047256 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:54.047268 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:54.064625 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:54.064643 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:54.131051 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:54.122613   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.123186   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.124913   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.125456   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.126931   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:54.122613   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.123186   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.124913   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.125456   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.126931   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:54.131061 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:54.131072 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:54.198481 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:54.198502 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:54.229657 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:54.229673 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:56.788389 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:56.798893 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:56.798978 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:56.825463 1302865 cri.go:89] found id: ""
	I1213 14:59:56.825479 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.825486 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:56.825491 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:56.825569 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:56.850902 1302865 cri.go:89] found id: ""
	I1213 14:59:56.850916 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.850923 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:56.850928 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:56.850997 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:56.875729 1302865 cri.go:89] found id: ""
	I1213 14:59:56.875743 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.875750 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:56.875755 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:56.875812 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:56.904598 1302865 cri.go:89] found id: ""
	I1213 14:59:56.904612 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.904619 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:56.904624 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:56.904684 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:56.929612 1302865 cri.go:89] found id: ""
	I1213 14:59:56.929626 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.929633 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:56.929639 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:56.929696 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:56.954323 1302865 cri.go:89] found id: ""
	I1213 14:59:56.954337 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.954345 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:56.954350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:56.954411 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:56.978916 1302865 cri.go:89] found id: ""
	I1213 14:59:56.978930 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.978937 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:56.978944 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:56.978955 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:56.996271 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:56.996290 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:57.067201 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:57.058255   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.058923   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.060482   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.061159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.062082   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:57.058255   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.058923   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.060482   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.061159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.062082   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:57.067214 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:57.067227 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:57.129467 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:57.129486 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:57.160756 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:57.160773 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:59.726541 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:59.737128 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:59.737192 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:59.762034 1302865 cri.go:89] found id: ""
	I1213 14:59:59.762050 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.762057 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:59.762063 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:59.762136 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:59.786710 1302865 cri.go:89] found id: ""
	I1213 14:59:59.786724 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.786731 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:59.786738 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:59.786799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:59.823635 1302865 cri.go:89] found id: ""
	I1213 14:59:59.823649 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.823656 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:59.823661 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:59.823721 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:59.853555 1302865 cri.go:89] found id: ""
	I1213 14:59:59.853568 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.853576 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:59.853580 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:59.853639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:59.878766 1302865 cri.go:89] found id: ""
	I1213 14:59:59.878781 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.878788 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:59.878793 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:59.878853 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:59.904985 1302865 cri.go:89] found id: ""
	I1213 14:59:59.904999 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.905006 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:59.905012 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:59.905084 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:59.929868 1302865 cri.go:89] found id: ""
	I1213 14:59:59.929882 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.929889 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:59.929896 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:59.929906 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:59.991222 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:59.991242 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:00:00.071719 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 15:00:00.071740 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:00:00.209914 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 15:00:00.209948 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:00:00.266871 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:00:00.266916 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:00:00.606023 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:00:00.575459   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.581155   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.582626   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.584864   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.585965   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 15:00:00.575459   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.581155   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.582626   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.584864   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.585965   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
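	Note: the repeated "connection refused" errors above mean nothing is serving on apiserver port 8441 at this point in the run. A minimal manual check from inside the node (assuming shell access via `minikube ssh -p functional-562018`; the commands below are illustrative and not part of this test's output) would be:
	
	    # confirm whether anything is listening on the apiserver port
	    sudo ss -ltnp | grep 8441
	    # probe the apiserver health endpoint directly; "connection refused" matches the errors above
	    curl -k https://localhost:8441/livez
	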
	I1213 15:00:03.107691 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:00:03.118897 1302865 kubeadm.go:602] duration metric: took 4m4.796487812s to restartPrimaryControlPlane
	W1213 15:00:03.118966 1302865 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 15:00:03.119044 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 15:00:03.535783 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:00:03.550485 1302865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 15:00:03.558915 1302865 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:00:03.558988 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:00:03.567415 1302865 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:00:03.567426 1302865 kubeadm.go:158] found existing configuration files:
	
	I1213 15:00:03.567481 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 15:00:03.576037 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:00:03.576097 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:00:03.584074 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 15:00:03.592593 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:00:03.592651 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:00:03.601062 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 15:00:03.609623 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:00:03.609683 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:00:03.617551 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 15:00:03.625819 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:00:03.625879 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 15:00:03.634092 1302865 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:00:03.677773 1302865 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 15:00:03.677823 1302865 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:00:03.751455 1302865 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:00:03.751520 1302865 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:00:03.751555 1302865 kubeadm.go:319] OS: Linux
	I1213 15:00:03.751599 1302865 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:00:03.751646 1302865 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:00:03.751692 1302865 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:00:03.751738 1302865 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:00:03.751785 1302865 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:00:03.751832 1302865 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:00:03.751877 1302865 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:00:03.751923 1302865 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:00:03.751968 1302865 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:00:03.818698 1302865 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:00:03.818804 1302865 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:00:03.818894 1302865 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:00:03.825177 1302865 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:00:03.828382 1302865 out.go:252]   - Generating certificates and keys ...
	I1213 15:00:03.828484 1302865 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:00:03.828568 1302865 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:00:03.828657 1302865 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 15:00:03.828722 1302865 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 15:00:03.828813 1302865 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 15:00:03.828870 1302865 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 15:00:03.828941 1302865 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 15:00:03.829005 1302865 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 15:00:03.829084 1302865 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 15:00:03.829160 1302865 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 15:00:03.829199 1302865 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 15:00:03.829258 1302865 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:00:04.177571 1302865 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:00:04.342429 1302865 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:00:04.668058 1302865 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:00:04.760444 1302865 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:00:05.013305 1302865 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:00:05.014367 1302865 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:00:05.019071 1302865 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:00:05.022340 1302865 out.go:252]   - Booting up control plane ...
	I1213 15:00:05.022442 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:00:05.022520 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:00:05.022586 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:00:05.042894 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:00:05.043146 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:00:05.050754 1302865 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:00:05.051023 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:00:05.051065 1302865 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:00:05.191860 1302865 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:00:05.191979 1302865 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:04:05.190333 1302865 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000252344s
	I1213 15:04:05.190362 1302865 kubeadm.go:319] 
	I1213 15:04:05.190420 1302865 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:04:05.190453 1302865 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:04:05.190557 1302865 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:04:05.190562 1302865 kubeadm.go:319] 
	I1213 15:04:05.190665 1302865 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:04:05.190696 1302865 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:04:05.190726 1302865 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:04:05.190729 1302865 kubeadm.go:319] 
	I1213 15:04:05.195506 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:04:05.195924 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:04:05.196033 1302865 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:04:05.196267 1302865 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 15:04:05.196271 1302865 kubeadm.go:319] 
	I1213 15:04:05.196339 1302865 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 15:04:05.196471 1302865 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000252344s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
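	Note: the kubeadm failure above already names the relevant troubleshooting commands. A sketch of running them on the node (again assuming `minikube ssh -p functional-562018` access), together with the same kubelet health probe kubeadm waits on, would be:
	
	    # inspect the kubelet service state and its recent journal entries
	    systemctl status kubelet
	    journalctl -xeu kubelet
	    # the probe kubeadm polls for up to 4m0s; a timeout or refusal means the kubelet never became healthy
	    curl -sSL http://127.0.0.1:10248/healthz
	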
	
	I1213 15:04:05.196557 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 15:04:05.613572 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:04:05.627532 1302865 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:04:05.627586 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:04:05.635470 1302865 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:04:05.635487 1302865 kubeadm.go:158] found existing configuration files:
	
	I1213 15:04:05.635549 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 15:04:05.643770 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:04:05.643832 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:04:05.651305 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 15:04:05.659066 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:04:05.659119 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:04:05.666497 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 15:04:05.674867 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:04:05.674922 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:04:05.682604 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 15:04:05.690488 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:04:05.690547 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 15:04:05.697863 1302865 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:04:05.737903 1302865 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 15:04:05.738332 1302865 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:04:05.824821 1302865 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:04:05.824881 1302865 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:04:05.824914 1302865 kubeadm.go:319] OS: Linux
	I1213 15:04:05.824955 1302865 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:04:05.825000 1302865 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:04:05.825043 1302865 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:04:05.825103 1302865 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:04:05.825147 1302865 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:04:05.825200 1302865 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:04:05.825250 1302865 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:04:05.825294 1302865 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:04:05.825336 1302865 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:04:05.892296 1302865 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:04:05.892418 1302865 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:04:05.892526 1302865 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:04:05.898143 1302865 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:04:05.903540 1302865 out.go:252]   - Generating certificates and keys ...
	I1213 15:04:05.903629 1302865 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:04:05.903698 1302865 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:04:05.903775 1302865 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 15:04:05.903837 1302865 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 15:04:05.903908 1302865 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 15:04:05.903958 1302865 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 15:04:05.904021 1302865 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 15:04:05.904084 1302865 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 15:04:05.904160 1302865 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 15:04:05.904234 1302865 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 15:04:05.904275 1302865 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 15:04:05.904330 1302865 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:04:05.992570 1302865 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:04:06.166280 1302865 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:04:06.244452 1302865 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:04:06.386969 1302865 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:04:06.630629 1302865 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:04:06.631865 1302865 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:04:06.635872 1302865 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:04:06.639278 1302865 out.go:252]   - Booting up control plane ...
	I1213 15:04:06.639389 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:04:06.639462 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:04:06.639523 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:04:06.659049 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:04:06.659158 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:04:06.666661 1302865 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:04:06.666977 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:04:06.667151 1302865 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:04:06.810085 1302865 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:04:06.810198 1302865 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:08:06.809904 1302865 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000225024s
	I1213 15:08:06.809924 1302865 kubeadm.go:319] 
	I1213 15:08:06.810412 1302865 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:08:06.810499 1302865 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:08:06.810921 1302865 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:08:06.810931 1302865 kubeadm.go:319] 
	I1213 15:08:06.811146 1302865 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:08:06.811211 1302865 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:08:06.811291 1302865 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:08:06.811302 1302865 kubeadm.go:319] 
	I1213 15:08:06.814720 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:08:06.816724 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:08:06.816881 1302865 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:08:06.817212 1302865 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 15:08:06.817216 1302865 kubeadm.go:319] 
	I1213 15:08:06.817309 1302865 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 15:08:06.817355 1302865 kubeadm.go:403] duration metric: took 12m8.532180676s to StartCluster
	I1213 15:08:06.817385 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:08:06.817448 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:08:06.841821 1302865 cri.go:89] found id: ""
	I1213 15:08:06.841835 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.841841 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 15:08:06.841847 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:08:06.841909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:08:06.865102 1302865 cri.go:89] found id: ""
	I1213 15:08:06.865122 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.865129 1302865 logs.go:284] No container was found matching "etcd"
	I1213 15:08:06.865134 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:08:06.865194 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:08:06.889354 1302865 cri.go:89] found id: ""
	I1213 15:08:06.889369 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.889376 1302865 logs.go:284] No container was found matching "coredns"
	I1213 15:08:06.889381 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:08:06.889444 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:08:06.916987 1302865 cri.go:89] found id: ""
	I1213 15:08:06.917001 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.917008 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 15:08:06.917014 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:08:06.917074 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:08:06.941966 1302865 cri.go:89] found id: ""
	I1213 15:08:06.941980 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.941987 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:08:06.941992 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:08:06.942053 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:08:06.967555 1302865 cri.go:89] found id: ""
	I1213 15:08:06.967570 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.967576 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 15:08:06.967582 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:08:06.967642 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:08:06.990643 1302865 cri.go:89] found id: ""
	I1213 15:08:06.990661 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.990669 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 15:08:06.990677 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 15:08:06.990688 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:08:07.046948 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 15:08:07.046967 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:08:07.064271 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:08:07.064292 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:08:07.156681 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:08:07.142614   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.149501   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.150219   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.151350   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.152858   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 15:08:07.142614   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.149501   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.150219   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.151350   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.152858   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:08:07.156693 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 15:08:07.156703 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:08:07.225180 1302865 logs.go:123] Gathering logs for container status ...
	I1213 15:08:07.225205 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 15:08:07.257292 1302865 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
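	Note: the SystemVerification warning above applies only to cgroup v1 hosts and refers to the kubelet configuration option 'FailCgroupV1'. A quick, hedged way to confirm which cgroup hierarchy the node is actually running (illustrative; not taken from this log) would be:
	
	    # "cgroup2fs" indicates cgroup v2; "tmpfs" indicates the legacy cgroup v1 hierarchy the warning refers to
	    stat -fc %T /sys/fs/cgroup/
	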
	W1213 15:08:07.257342 1302865 out.go:285] * 
	W1213 15:08:07.257449 1302865 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:08:07.257519 1302865 out.go:285] * 
	W1213 15:08:07.259853 1302865 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 15:08:07.265906 1302865 out.go:203] 
	W1213 15:08:07.268865 1302865 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:08:07.268911 1302865 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 15:08:07.268933 1302865 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 15:08:07.272012 1302865 out.go:203] 
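	Note: the suggestion above is the standard minikube hint for kubelet cgroup-driver mismatches. A hedged sketch of retrying this profile with that extra config (profile name, driver, and runtime taken from this run; this log does not establish that the flag actually resolves the failure) would be:
	
	    minikube start -p functional-562018 --driver=docker --container-runtime=containerd \
	      --extra-config=kubelet.cgroup-driver=systemd
	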
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
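	Note: the empty container-status table above is the output of crictl against the containerd CRI socket. A minimal way to reproduce that listing on the node (assuming `minikube ssh -p functional-562018` access; the flags shown are standard crictl options, not taken from this log) would be:
	
	    # list all CRI containers via containerd; an empty result matches the table above
	    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
	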
	
	
	==> containerd <==
	Dec 13 15:08:16 functional-562018 containerd[9685]: time="2025-12-13T15:08:16.614640453Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.594699770Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\""
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.603547510Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.603653813Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.607908789Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\" returns successfully"
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.989472917Z" level=info msg="No images store for sha256:3fb21f6d7fe9fd863c3548cb9498b8e552e958f0a50edc71e300f38a249a8021"
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.991836514Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.999814739Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:18 functional-562018 containerd[9685]: time="2025-12-13T15:08:18.000343226Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.424371600Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.427299481Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.429590825Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.438723433Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\" returns successfully"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.738866011Z" level=info msg="No images store for sha256:3fb21f6d7fe9fd863c3548cb9498b8e552e958f0a50edc71e300f38a249a8021"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.741155321Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.748278873Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.748608153Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.747498767Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\""
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.750124437Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.752467907Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.765182475Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\" returns successfully"
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.628092462Z" level=info msg="No images store for sha256:bffe89cb060c176804db60dc616d4e1117e4c9cbe423e0274bf52a76645edb04"
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.630292191Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.637226743Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.637535149Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:10:06.388702   23348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:10:06.389126   23348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:10:06.390509   23348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:10:06.391212   23348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:10:06.392947   23348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 15:10:06 up  6:52,  0 user,  load average: 0.46, 0.40, 0.50
	Linux functional-562018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 15:10:03 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:10:04 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 476.
	Dec 13 15:10:04 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:04 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:04 functional-562018 kubelet[23183]: E1213 15:10:04.135351   23183 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:10:04 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:10:04 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:10:04 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 477.
	Dec 13 15:10:04 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:04 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:04 functional-562018 kubelet[23227]: E1213 15:10:04.894908   23227 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:10:04 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:10:04 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:10:05 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 478.
	Dec 13 15:10:05 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:05 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:05 functional-562018 kubelet[23263]: E1213 15:10:05.657169   23263 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:10:05 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:10:05 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:10:06 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 479.
	Dec 13 15:10:06 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:06 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:06 functional-562018 kubelet[23352]: E1213 15:10:06.398487   23352 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:10:06 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:10:06 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 2 (356.095301ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.08s)
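The kubelet journal above points at the root cause for this whole group of failures: on this cgroup v1 host, kubelet v1.35.0-beta.0 fails its own configuration validation ("kubelet is configured to not run on a host using cgroup v1"), so no static pods start, the API server on 8441 never comes up, and every status/kubectl call above is refused. Below is a minimal sketch of the two workarounds the logs themselves point at, assuming the host stays on cgroup v1; the YAML field spelling failCgroupV1 is an assumption inferred from the 'FailCgroupV1' warning and should be checked against the v1.35 KubeletConfiguration reference.

	# 1) Suggestion printed by minikube in the output above:
	out/minikube-linux-arm64 start -p functional-562018 --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

	# 2) Per the kubeadm SystemVerification warning, explicitly opt back into cgroup v1
	#    via a kubelet configuration patch (hypothetical file name; field spelling assumed):
	#    kubeletconfiguration-cgroupv1.yaml
	#      apiVersion: kubelet.config.k8s.io/v1beta1
	#      kind: KubeletConfiguration
	#      failCgroupV1: false

The longer-term fix implied by the deprecation notice is moving the CI hosts to cgroup v2.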

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-562018 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-562018 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (68.675541ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-562018 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-562018 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-562018 describe po hello-node-connect: exit status 1 (67.254269ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-562018 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-562018 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-562018 logs -l app=hello-node-connect: exit status 1 (69.307755ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-562018 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-562018 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-562018 describe svc hello-node-connect: exit status 1 (63.342732ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-562018 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-562018
helpers_test.go:244: (dbg) docker inspect functional-562018:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	        "Created": "2025-12-13T14:41:15.451086653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1291703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T14:41:15.527927053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hosts",
	        "LogPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648-json.log",
	        "Name": "/functional-562018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-562018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-562018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	                "LowerDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-562018",
	                "Source": "/var/lib/docker/volumes/functional-562018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-562018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-562018",
	                "name.minikube.sigs.k8s.io": "functional-562018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b22297a29553cdd0dbc4eaa766abcdb1e67465ee18f1e2f5f9917dc8cf6d08",
	            "SandboxKey": "/var/run/docker/netns/f4b22297a295",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-562018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f3:95:ff:30:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd2e7aed753870546b4bccdeed8073ad5795c14334e906d06b964f72dc448c38",
	                    "EndpointID": "a0e947c1e40f773105c811b67b7d1d63f19d3a20060380bbde944bf9bfe39be5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-562018",
	                        "2cd1277ca783"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
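The inspect output above is where the post-mortem reads the cluster wiring: the control-plane container publishes 8441/tcp to 127.0.0.1:33921 on the host, while kubectl talks to 192.168.49.2:8441 inside the functional-562018 network. As a small sketch, the same mapping can be read back with docker's Go-template formatting, the same pattern minikube itself uses for 22/tcp later in this log (values shown are the ones from this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-562018
	# -> 33921 in this run; hitting it (e.g. curl -k https://127.0.0.1:33921/version) would still be
	#    refused here, because the kubelet never started the kube-apiserver static pod.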
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018: exit status 2 (319.005417ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-562018 ssh sudo cat /usr/share/ca-certificates/1252934.pem                                                                                           │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ image   │ functional-562018 image ls                                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh sudo cat /etc/ssl/certs/12529342.pem                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ image   │ functional-562018 image save kicbase/echo-server:functional-562018 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh sudo cat /usr/share/ca-certificates/12529342.pem                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ image   │ functional-562018 image rm kicbase/echo-server:functional-562018 --alsologtostderr                                                                              │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ image   │ functional-562018 image ls                                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh sudo cat /etc/test/nested/copy/1252934/hosts                                                                                              │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ image   │ functional-562018 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ service │ functional-562018 service list                                                                                                                                  │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ image   │ functional-562018 image ls                                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ service │ functional-562018 service list -o json                                                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ image   │ functional-562018 image save --daemon kicbase/echo-server:functional-562018 --alsologtostderr                                                                   │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ service │ functional-562018 service --namespace=default --https --url hello-node                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ service │ functional-562018 service hello-node --url --format={{.IP}}                                                                                                     │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ ssh     │ functional-562018 ssh echo hello                                                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ service │ functional-562018 service hello-node --url                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ ssh     │ functional-562018 ssh cat /etc/hostname                                                                                                                         │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ tunnel  │ functional-562018 tunnel --alsologtostderr                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ tunnel  │ functional-562018 tunnel --alsologtostderr                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ tunnel  │ functional-562018 tunnel --alsologtostderr                                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ addons  │ functional-562018 addons list                                                                                                                                   │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ addons  │ functional-562018 addons list -o json                                                                                                                           │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:55:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:55:53.719613 1302865 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:55:53.719728 1302865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:55:53.719732 1302865 out.go:374] Setting ErrFile to fd 2...
	I1213 14:55:53.719735 1302865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:55:53.719985 1302865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:55:53.720335 1302865 out.go:368] Setting JSON to false
	I1213 14:55:53.721190 1302865 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23903,"bootTime":1765613851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:55:53.721260 1302865 start.go:143] virtualization:  
	I1213 14:55:53.724694 1302865 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:55:53.728380 1302865 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:55:53.728496 1302865 notify.go:221] Checking for updates...
	I1213 14:55:53.734124 1302865 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:55:53.736928 1302865 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:55:53.739728 1302865 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:55:53.742545 1302865 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 14:55:53.745302 1302865 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:55:53.748618 1302865 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:55:53.748719 1302865 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:55:53.782535 1302865 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:55:53.782649 1302865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:55:53.845662 1302865 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 14:55:53.829246857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:55:53.845758 1302865 docker.go:319] overlay module found
	I1213 14:55:53.849849 1302865 out.go:179] * Using the docker driver based on existing profile
	I1213 14:55:53.852762 1302865 start.go:309] selected driver: docker
	I1213 14:55:53.852774 1302865 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:53.852875 1302865 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:55:53.852984 1302865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:55:53.929886 1302865 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 14:55:53.921020705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:55:53.930294 1302865 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 14:55:53.930319 1302865 cni.go:84] Creating CNI manager for ""
	I1213 14:55:53.930367 1302865 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:55:53.930406 1302865 start.go:353] cluster config:
	{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:53.933662 1302865 out.go:179] * Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	I1213 14:55:53.936743 1302865 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 14:55:53.939760 1302865 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 14:55:53.942676 1302865 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:55:53.942716 1302865 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 14:55:53.942732 1302865 cache.go:65] Caching tarball of preloaded images
	I1213 14:55:53.942759 1302865 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 14:55:53.942845 1302865 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 14:55:53.942855 1302865 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 14:55:53.942970 1302865 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
	I1213 14:55:53.962568 1302865 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 14:55:53.962579 1302865 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 14:55:53.962597 1302865 cache.go:243] Successfully downloaded all kic artifacts
	I1213 14:55:53.962628 1302865 start.go:360] acquireMachinesLock for functional-562018: {Name:mk6a7956e4fce5d8e0f4d6fe039ab67ad6cd688b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:55:53.962689 1302865 start.go:364] duration metric: took 45.029µs to acquireMachinesLock for "functional-562018"
	I1213 14:55:53.962707 1302865 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:55:53.962711 1302865 fix.go:54] fixHost starting: 
	I1213 14:55:53.962972 1302865 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:55:53.980087 1302865 fix.go:112] recreateIfNeeded on functional-562018: state=Running err=<nil>
	W1213 14:55:53.980106 1302865 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:55:53.983261 1302865 out.go:252] * Updating the running docker "functional-562018" container ...
	I1213 14:55:53.983285 1302865 machine.go:94] provisionDockerMachine start ...
	I1213 14:55:53.983388 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.000833 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.001170 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.001177 1302865 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:55:54.155013 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:55:54.155027 1302865 ubuntu.go:182] provisioning hostname "functional-562018"
	I1213 14:55:54.155091 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.172804 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.173100 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.173108 1302865 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-562018 && echo "functional-562018" | sudo tee /etc/hostname
	I1213 14:55:54.335232 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:55:54.335302 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.353315 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.353625 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.353638 1302865 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-562018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-562018/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-562018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:55:54.503602 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:55:54.503618 1302865 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 14:55:54.503648 1302865 ubuntu.go:190] setting up certificates
	I1213 14:55:54.503664 1302865 provision.go:84] configureAuth start
	I1213 14:55:54.503732 1302865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:55:54.520737 1302865 provision.go:143] copyHostCerts
	I1213 14:55:54.520806 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 14:55:54.520813 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:55:54.520892 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 14:55:54.520992 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 14:55:54.520996 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:55:54.521022 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 14:55:54.521079 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 14:55:54.521082 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:55:54.521105 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 14:55:54.521157 1302865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.functional-562018 san=[127.0.0.1 192.168.49.2 functional-562018 localhost minikube]
	I1213 14:55:54.737947 1302865 provision.go:177] copyRemoteCerts
	I1213 14:55:54.738007 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:55:54.738047 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.756271 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:54.864730 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 14:55:54.885080 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 14:55:54.903456 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 14:55:54.921228 1302865 provision.go:87] duration metric: took 417.552003ms to configureAuth
	I1213 14:55:54.921245 1302865 ubuntu.go:206] setting minikube options for container-runtime
	I1213 14:55:54.921445 1302865 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:55:54.921451 1302865 machine.go:97] duration metric: took 938.161957ms to provisionDockerMachine
	I1213 14:55:54.921458 1302865 start.go:293] postStartSetup for "functional-562018" (driver="docker")
	I1213 14:55:54.921469 1302865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:55:54.921526 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:55:54.921569 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.939146 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.043619 1302865 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:55:55.047116 1302865 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 14:55:55.047136 1302865 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 14:55:55.047147 1302865 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 14:55:55.047201 1302865 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 14:55:55.047279 1302865 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 14:55:55.047377 1302865 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> hosts in /etc/test/nested/copy/1252934
	I1213 14:55:55.047422 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1252934
	I1213 14:55:55.055022 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:55:55.072651 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts --> /etc/test/nested/copy/1252934/hosts (40 bytes)
	I1213 14:55:55.090146 1302865 start.go:296] duration metric: took 168.672467ms for postStartSetup
	I1213 14:55:55.090222 1302865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:55:55.090277 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.110519 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.212743 1302865 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 14:55:55.217665 1302865 fix.go:56] duration metric: took 1.254946074s for fixHost
	I1213 14:55:55.217694 1302865 start.go:83] releasing machines lock for "functional-562018", held for 1.254985507s
	I1213 14:55:55.217771 1302865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:55:55.234536 1302865 ssh_runner.go:195] Run: cat /version.json
	I1213 14:55:55.234580 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.234841 1302865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:55:55.234904 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.258034 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.263005 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.363489 1302865 ssh_runner.go:195] Run: systemctl --version
	I1213 14:55:55.466608 1302865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 14:55:55.470983 1302865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:55:55.471044 1302865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:55:55.478685 1302865 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 14:55:55.478700 1302865 start.go:496] detecting cgroup driver to use...
	I1213 14:55:55.478730 1302865 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 14:55:55.478776 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 14:55:55.494349 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 14:55:55.507276 1302865 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:55:55.507360 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:55:55.523374 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:55:55.537388 1302865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:55:55.656533 1302865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:55:55.769801 1302865 docker.go:234] disabling docker service ...
	I1213 14:55:55.769857 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:55:55.784548 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:55:55.797129 1302865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:55:55.915684 1302865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:55:56.027646 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:55:56.050399 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:55:56.066005 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 14:55:56.076093 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 14:55:56.085556 1302865 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 14:55:56.085627 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 14:55:56.094545 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:55:56.104197 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 14:55:56.114269 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:55:56.123172 1302865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:55:56.132178 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 14:55:56.141074 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 14:55:56.150470 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 14:55:56.160063 1302865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:55:56.167903 1302865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:55:56.175659 1302865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:55:56.295844 1302865 ssh_runner.go:195] Run: sudo systemctl restart containerd
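Before containerd is restarted above, crictl is pointed at the containerd socket via /etc/crictl.yaml and /etc/containerd/config.toml is patched with a series of sed edits; because the host reports the "cgroupfs" cgroup driver, SystemdCgroup is forced to false. A minimal Go sketch of just that SystemdCgroup rewrite (the path and regex mirror the logged sed command; everything else is illustrative):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Println(err)
	}
}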
	I1213 14:55:56.441580 1302865 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 14:55:56.441654 1302865 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 14:55:56.445551 1302865 start.go:564] Will wait 60s for crictl version
	I1213 14:55:56.445607 1302865 ssh_runner.go:195] Run: which crictl
	I1213 14:55:56.449128 1302865 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 14:55:56.473587 1302865 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 14:55:56.473654 1302865 ssh_runner.go:195] Run: containerd --version
	I1213 14:55:56.493885 1302865 ssh_runner.go:195] Run: containerd --version
	I1213 14:55:56.518032 1302865 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 14:55:56.521077 1302865 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 14:55:56.537369 1302865 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 14:55:56.544433 1302865 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 14:55:56.547248 1302865 kubeadm.go:884] updating cluster {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:55:56.547410 1302865 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:55:56.547500 1302865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:55:56.572443 1302865 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:55:56.572458 1302865 containerd.go:534] Images already preloaded, skipping extraction
	I1213 14:55:56.572525 1302865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:55:56.603700 1302865 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:55:56.603712 1302865 cache_images.go:86] Images are preloaded, skipping loading
	I1213 14:55:56.603718 1302865 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 14:55:56.603824 1302865 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-562018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 14:55:56.603888 1302865 ssh_runner.go:195] Run: sudo crictl info
	I1213 14:55:56.640969 1302865 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 14:55:56.640988 1302865 cni.go:84] Creating CNI manager for ""
	I1213 14:55:56.640997 1302865 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:55:56.641011 1302865 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:55:56.641033 1302865 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-562018 NodeName:functional-562018 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:55:56.641163 1302865 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-562018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
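The kubeadm configuration rendered above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by --- and written to /var/tmp/minikube/kubeadm.yaml.new. A small dependency-free Go sketch that splits such a stream and prints each document's kind (the file path comes from the log; the program itself is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Read the generated multi-document kubeadm config and report each kind.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Println(strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}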
	I1213 14:55:56.641238 1302865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 14:55:56.649442 1302865 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:55:56.649507 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:55:56.657006 1302865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 14:55:56.669728 1302865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 14:55:56.682334 1302865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1213 14:55:56.694926 1302865 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 14:55:56.698838 1302865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:55:56.837238 1302865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:55:57.584722 1302865 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018 for IP: 192.168.49.2
	I1213 14:55:57.584733 1302865 certs.go:195] generating shared ca certs ...
	I1213 14:55:57.584753 1302865 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:55:57.584897 1302865 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 14:55:57.584947 1302865 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 14:55:57.584954 1302865 certs.go:257] generating profile certs ...
	I1213 14:55:57.585039 1302865 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key
	I1213 14:55:57.585090 1302865 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee
	I1213 14:55:57.585124 1302865 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key
	I1213 14:55:57.585235 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 14:55:57.585272 1302865 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 14:55:57.585280 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:55:57.585307 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 14:55:57.585330 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:55:57.585354 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 14:55:57.585399 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:55:57.591362 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:55:57.616349 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:55:57.635438 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:55:57.655371 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 14:55:57.672503 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 14:55:57.689594 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 14:55:57.706530 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:55:57.723556 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:55:57.740287 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 14:55:57.757304 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 14:55:57.774649 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:55:57.792687 1302865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:55:57.805822 1302865 ssh_runner.go:195] Run: openssl version
	I1213 14:55:57.812225 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.819503 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 14:55:57.826726 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.830446 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.830502 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.871253 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 14:55:57.878814 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.886029 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:55:57.893560 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.897283 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.897343 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.938225 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:55:57.946132 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.953318 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 14:55:57.960779 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.964616 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.964674 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 14:55:58.013928 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
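The block above wires each CA bundle into the system trust store: the PEM is symlinked under /etc/ssl/certs, its OpenSSL subject hash is computed with openssl x509 -hash, and a <hash>.0 link (for example b5213941.0) is verified. A hedged Go sketch of that pattern, with an illustrative helper name and simplified error handling:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert derives the OpenSSL subject hash of a CA PEM and ensures a
// <hash>.0 symlink exists under /etc/ssl/certs so TLS clients can find it.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}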
	I1213 14:55:58.021993 1302865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:55:58.026144 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:55:58.067380 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:55:58.114887 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:55:58.156572 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:55:58.199117 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:55:58.241809 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 14:55:58.285184 1302865 kubeadm.go:401] StartCluster: {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:58.285266 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 14:55:58.285327 1302865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:55:58.314259 1302865 cri.go:89] found id: ""
	I1213 14:55:58.314322 1302865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:55:58.322386 1302865 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 14:55:58.322396 1302865 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 14:55:58.322453 1302865 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 14:55:58.329880 1302865 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.330377 1302865 kubeconfig.go:125] found "functional-562018" server: "https://192.168.49.2:8441"
	I1213 14:55:58.331729 1302865 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 14:55:58.341644 1302865 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 14:41:23.876598830 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 14:55:56.689854034 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
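The reconfiguration decision above comes from a plain diff -u of the deployed kubeadm.yaml against the freshly generated one; any difference (here the enable-admission-plugins override) sends the restart path into a full kubeadm reconfigure. A minimal Go sketch of that drift check (needsReconfigure is an illustrative name, not minikube's actual API):

package main

import (
	"fmt"
	"os/exec"
)

// needsReconfigure runs `sudo diff -u` between the deployed config and the
// newly generated one; diff exits non-zero when the files differ, which this
// sketch treats as drift.
func needsReconfigure(oldPath, newPath string) (bool, string) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err != nil {
		return true, string(out)
	}
	return false, ""
}

func main() {
	drift, patch := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if drift {
		fmt.Println("kubeadm config drift detected:")
		fmt.Println(patch)
	}
}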
	I1213 14:55:58.341663 1302865 kubeadm.go:1161] stopping kube-system containers ...
	I1213 14:55:58.341678 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 14:55:58.341741 1302865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:55:58.374972 1302865 cri.go:89] found id: ""
	I1213 14:55:58.375050 1302865 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 14:55:58.396016 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:55:58.404525 1302865 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 14:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 13 14:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 13 14:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 14:45 /etc/kubernetes/scheduler.conf
	
	I1213 14:55:58.404584 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 14:55:58.412946 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 14:55:58.420580 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.420635 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 14:55:58.428221 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 14:55:58.435971 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.436028 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:55:58.443530 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 14:55:58.451393 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.451448 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
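The grep-and-remove steps above keep a kubeconfig only if it already points at https://control-plane.minikube.internal:8441; otherwise the file is deleted so the following kubeadm phases regenerate it. An illustrative Go sketch of the same check (file list and endpoint are taken from the log; the real flow greps over SSH rather than reading locally):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	for _, conf := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// Remove the file when it is unreadable or does not reference the
		// expected control-plane endpoint.
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Println("removing", conf)
			_ = os.Remove(conf)
		}
	}
}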
	I1213 14:55:58.458854 1302865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 14:55:58.466605 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:58.520413 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:59.744405 1302865 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.223964216s)
	I1213 14:55:59.744467 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:59.946438 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:56:00.013725 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:56:00.113319 1302865 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:56:00.114955 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:00.613579 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:01.114177 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:01.613592 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:02.113571 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:02.613593 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:03.113840 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:03.613569 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:04.114249 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:04.613852 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:05.113537 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:05.613696 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:06.113540 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:06.614342 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:07.113785 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:07.613457 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:08.114283 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:08.613596 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:09.113597 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:09.614352 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:10.114532 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:10.613598 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:11.114365 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:11.614158 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:12.113539 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:12.613531 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:13.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:13.613463 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:14.114527 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:14.614435 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:15.113510 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:15.614373 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:16.114388 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:16.613507 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:17.113567 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:17.614369 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:18.113844 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:18.613714 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:19.114404 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:19.614169 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:20.114541 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:20.613650 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:21.113498 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:21.613589 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:22.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:22.613522 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:23.114240 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:23.614475 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:24.113893 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:24.614467 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:25.114531 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:25.613526 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:26.114346 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:26.614504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:27.113518 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:27.614286 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:28.114181 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:28.613958 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:29.113601 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:29.614343 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:30.114309 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:30.614109 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:31.114271 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:31.613510 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:32.114261 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:32.614199 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:33.114060 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:33.614237 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:34.114371 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:34.614467 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:35.114182 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:35.613614 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:36.113542 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:36.614402 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:37.114233 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:37.613522 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:38.113599 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:38.613584 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:39.114045 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:39.613569 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:40.113521 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:40.613504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:41.113503 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:41.614239 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:42.113697 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:42.614293 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:43.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:43.614231 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:44.114413 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:44.614537 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:45.114187 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:45.613592 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:46.113667 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:46.613755 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:47.113597 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:47.614262 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:48.113463 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:48.613700 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:49.113578 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:49.614192 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:50.113501 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:50.613492 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:51.114160 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:51.613924 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:52.114491 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:52.613532 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:53.113608 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:53.613620 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:54.114432 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:54.614359 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:55.114461 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:55.614143 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:56.113587 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:56.614451 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:57.113619 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:57.613622 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:58.113547 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:58.614429 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:59.113617 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:59.613534 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
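The long run of pgrep calls above is the apiserver wait loop: the kube-apiserver process is polled roughly every 500ms until it appears or the wait gives up, after which the tooling falls back to collecting diagnostics (container listings and journal logs below). A minimal Go sketch of that polling pattern; the helper name and timeout value are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls for the kube-apiserver process with pgrep,
// mirroring the ~500ms cadence seen in the log, until it appears or the
// deadline passes.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(60 * time.Second); err != nil {
		fmt.Println(err)
	}
}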
	I1213 14:57:00.124126 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:00.124233 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:00.200982 1302865 cri.go:89] found id: ""
	I1213 14:57:00.201003 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.201011 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:00.201018 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:00.201100 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:00.237755 1302865 cri.go:89] found id: ""
	I1213 14:57:00.237770 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.237778 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:00.237783 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:00.237861 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:00.301679 1302865 cri.go:89] found id: ""
	I1213 14:57:00.301694 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.301702 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:00.301709 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:00.301778 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:00.347228 1302865 cri.go:89] found id: ""
	I1213 14:57:00.347243 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.347251 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:00.347256 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:00.347356 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:00.376454 1302865 cri.go:89] found id: ""
	I1213 14:57:00.376471 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.376479 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:00.376485 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:00.376555 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:00.408967 1302865 cri.go:89] found id: ""
	I1213 14:57:00.408982 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.408989 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:00.408995 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:00.409059 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:00.437494 1302865 cri.go:89] found id: ""
	I1213 14:57:00.437509 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.437516 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:00.437524 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:00.437534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:00.493840 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:00.493860 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:00.511767 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:00.511785 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:00.579231 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:00.570332   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.571045   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.572740   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.573345   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.575117   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:00.570332   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.571045   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.572740   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.573345   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.575117   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:00.579242 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:00.579253 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:00.641446 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:00.641467 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
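	(Editor's illustration, not part of the captured log: the lines above and below show the same wait cycle repeating roughly every three seconds — pgrep for kube-apiserver, a crictl query per control-plane container, then log gathering. The following is a minimal Go sketch of such a polling loop, assuming crictl is installed and sudo is available; it is not minikube's actual implementation, and the two-minute deadline is an arbitrary example value.)

	// polling sketch: wait until a kube-apiserver container appears in the CRI runtime
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// apiserverContainerIDs runs the same query seen in the log:
	// list all containers (running or exited) named kube-apiserver.
	func apiserverContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // example timeout, not minikube's
		for time.Now().Before(deadline) {
			ids, err := apiserverContainerIDs()
			switch {
			case err != nil:
				fmt.Println("crictl failed:", err)
			case len(ids) > 0:
				fmt.Println("kube-apiserver container found:", ids[0])
				return
			default:
				fmt.Println(`No container was found matching "kube-apiserver"; retrying`)
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}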
	I1213 14:57:03.171486 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:03.181873 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:03.181935 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:03.212211 1302865 cri.go:89] found id: ""
	I1213 14:57:03.212226 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.212232 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:03.212244 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:03.212304 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:03.237934 1302865 cri.go:89] found id: ""
	I1213 14:57:03.237949 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.237957 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:03.237962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:03.238034 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:03.263822 1302865 cri.go:89] found id: ""
	I1213 14:57:03.263836 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.263843 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:03.263848 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:03.263910 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:03.289876 1302865 cri.go:89] found id: ""
	I1213 14:57:03.289890 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.289898 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:03.289902 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:03.289965 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:03.317957 1302865 cri.go:89] found id: ""
	I1213 14:57:03.317972 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.317979 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:03.318000 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:03.318060 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:03.346780 1302865 cri.go:89] found id: ""
	I1213 14:57:03.346793 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.346800 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:03.346805 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:03.346864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:03.371472 1302865 cri.go:89] found id: ""
	I1213 14:57:03.371485 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.371493 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:03.371501 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:03.371512 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:03.399569 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:03.399588 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:03.454307 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:03.454327 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:03.472933 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:03.472951 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:03.538528 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:03.529465   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.530241   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532007   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532517   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.534246   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:03.529465   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.530241   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532007   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532517   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.534246   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:03.538539 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:03.538550 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:06.101738 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:06.112716 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:06.112778 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:06.139740 1302865 cri.go:89] found id: ""
	I1213 14:57:06.139753 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.139759 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:06.139770 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:06.139831 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:06.169906 1302865 cri.go:89] found id: ""
	I1213 14:57:06.169920 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.169927 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:06.169932 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:06.169993 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:06.194468 1302865 cri.go:89] found id: ""
	I1213 14:57:06.194482 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.194492 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:06.194497 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:06.194556 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:06.219346 1302865 cri.go:89] found id: ""
	I1213 14:57:06.219360 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.219367 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:06.219372 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:06.219466 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:06.244844 1302865 cri.go:89] found id: ""
	I1213 14:57:06.244858 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.244865 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:06.244870 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:06.244928 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:06.269412 1302865 cri.go:89] found id: ""
	I1213 14:57:06.269425 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.269433 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:06.269438 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:06.269498 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:06.293947 1302865 cri.go:89] found id: ""
	I1213 14:57:06.293960 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.293967 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:06.293975 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:06.293991 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:06.320232 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:06.320249 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:06.375210 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:06.375229 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:06.392065 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:06.392081 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:06.457910 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:06.449858   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.450478   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452122   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452601   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.454099   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:06.449858   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.450478   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452122   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452601   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.454099   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:06.457920 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:06.457931 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:09.020376 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:09.030584 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:09.030644 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:09.057441 1302865 cri.go:89] found id: ""
	I1213 14:57:09.057455 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.057462 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:09.057467 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:09.057529 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:09.091252 1302865 cri.go:89] found id: ""
	I1213 14:57:09.091266 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.091273 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:09.091277 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:09.091357 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:09.133954 1302865 cri.go:89] found id: ""
	I1213 14:57:09.133969 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.133976 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:09.133981 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:09.134041 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:09.161351 1302865 cri.go:89] found id: ""
	I1213 14:57:09.161365 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.161372 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:09.161386 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:09.161449 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:09.186493 1302865 cri.go:89] found id: ""
	I1213 14:57:09.186507 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.186515 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:09.186519 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:09.186579 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:09.210752 1302865 cri.go:89] found id: ""
	I1213 14:57:09.210766 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.210774 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:09.210779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:09.210841 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:09.235216 1302865 cri.go:89] found id: ""
	I1213 14:57:09.235231 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.235238 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:09.235246 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:09.235256 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:09.290010 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:09.290030 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:09.307105 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:09.307122 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:09.373837 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:09.365574   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.366312   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368046   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368412   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.369910   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:09.365574   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.366312   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368046   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368412   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.369910   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:09.373848 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:09.373862 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:09.435916 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:09.435937 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:11.968947 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:11.978917 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:11.978976 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:12.003367 1302865 cri.go:89] found id: ""
	I1213 14:57:12.003387 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.003395 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:12.003401 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:12.003472 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:12.030862 1302865 cri.go:89] found id: ""
	I1213 14:57:12.030876 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.030883 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:12.030889 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:12.030947 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:12.055991 1302865 cri.go:89] found id: ""
	I1213 14:57:12.056006 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.056014 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:12.056020 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:12.056078 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:12.088685 1302865 cri.go:89] found id: ""
	I1213 14:57:12.088699 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.088706 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:12.088711 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:12.088771 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:12.119175 1302865 cri.go:89] found id: ""
	I1213 14:57:12.119199 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.119206 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:12.119212 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:12.119276 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:12.148170 1302865 cri.go:89] found id: ""
	I1213 14:57:12.148192 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.148199 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:12.148204 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:12.148276 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:12.173907 1302865 cri.go:89] found id: ""
	I1213 14:57:12.173929 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.173936 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:12.173944 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:12.173955 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:12.230024 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:12.230044 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:12.249202 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:12.249219 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:12.317257 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:12.307463   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.308131   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.309930   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.310545   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.312207   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:12.307463   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.308131   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.309930   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.310545   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.312207   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:12.317267 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:12.317284 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:12.384433 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:12.384455 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:14.917091 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:14.927788 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:14.927850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:14.953190 1302865 cri.go:89] found id: ""
	I1213 14:57:14.953205 1302865 logs.go:282] 0 containers: []
	W1213 14:57:14.953212 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:14.953226 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:14.953289 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:14.978043 1302865 cri.go:89] found id: ""
	I1213 14:57:14.978068 1302865 logs.go:282] 0 containers: []
	W1213 14:57:14.978075 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:14.978081 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:14.978175 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:15.004731 1302865 cri.go:89] found id: ""
	I1213 14:57:15.004749 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.004756 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:15.004761 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:15.004846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:15.048669 1302865 cri.go:89] found id: ""
	I1213 14:57:15.048685 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.048693 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:15.048698 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:15.048777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:15.085505 1302865 cri.go:89] found id: ""
	I1213 14:57:15.085520 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.085528 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:15.085534 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:15.085607 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:15.124753 1302865 cri.go:89] found id: ""
	I1213 14:57:15.124776 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.124784 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:15.124790 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:15.124860 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:15.168668 1302865 cri.go:89] found id: ""
	I1213 14:57:15.168682 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.168690 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:15.168698 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:15.168720 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:15.236878 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:15.228546   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.229113   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.230853   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.231344   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.232993   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:15.228546   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.229113   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.230853   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.231344   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.232993   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:15.236889 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:15.236899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:15.299331 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:15.299361 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:15.331125 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:15.331142 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:15.391451 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:15.391478 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:17.910179 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:17.920514 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:17.920590 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:17.945066 1302865 cri.go:89] found id: ""
	I1213 14:57:17.945081 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.945088 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:17.945094 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:17.945152 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:17.972856 1302865 cri.go:89] found id: ""
	I1213 14:57:17.972870 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.972878 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:17.972882 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:17.972944 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:17.999205 1302865 cri.go:89] found id: ""
	I1213 14:57:17.999219 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.999226 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:17.999231 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:17.999288 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:18.034164 1302865 cri.go:89] found id: ""
	I1213 14:57:18.034178 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.034185 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:18.034190 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:18.034255 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:18.060346 1302865 cri.go:89] found id: ""
	I1213 14:57:18.060361 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.060368 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:18.060373 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:18.060438 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:18.089688 1302865 cri.go:89] found id: ""
	I1213 14:57:18.089702 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.089710 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:18.089718 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:18.089780 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:18.128859 1302865 cri.go:89] found id: ""
	I1213 14:57:18.128874 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.128881 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:18.128889 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:18.128899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:18.188820 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:18.188842 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:18.206229 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:18.206247 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:18.277989 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:18.269084   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.269641   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.271489   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.272150   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.273959   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:18.269084   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.269641   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.271489   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.272150   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.273959   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:18.277999 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:18.278009 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:18.339945 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:18.339965 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:20.869114 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:20.879800 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:20.879866 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:20.905760 1302865 cri.go:89] found id: ""
	I1213 14:57:20.905774 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.905781 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:20.905786 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:20.905849 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:20.931353 1302865 cri.go:89] found id: ""
	I1213 14:57:20.931367 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.931374 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:20.931379 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:20.931445 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:20.956682 1302865 cri.go:89] found id: ""
	I1213 14:57:20.956696 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.956704 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:20.956709 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:20.956769 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:20.980824 1302865 cri.go:89] found id: ""
	I1213 14:57:20.980838 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.980845 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:20.980850 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:20.980909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:21.008951 1302865 cri.go:89] found id: ""
	I1213 14:57:21.008974 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.008982 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:21.008987 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:21.009058 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:21.038190 1302865 cri.go:89] found id: ""
	I1213 14:57:21.038204 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.038211 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:21.038216 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:21.038277 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:21.063608 1302865 cri.go:89] found id: ""
	I1213 14:57:21.063622 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.063630 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:21.063638 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:21.063648 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:21.132089 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:21.132109 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:21.171889 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:21.171908 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:21.230786 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:21.230806 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:21.247733 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:21.247753 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:21.318785 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:21.309659   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.310476   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312082   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312759   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.314434   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:21.309659   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.310476   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312082   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312759   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.314434   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:23.819828 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:23.830541 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:23.830604 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:23.853826 1302865 cri.go:89] found id: ""
	I1213 14:57:23.853840 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.853856 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:23.853862 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:23.853933 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:23.879146 1302865 cri.go:89] found id: ""
	I1213 14:57:23.879169 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.879177 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:23.879182 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:23.879253 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:23.904357 1302865 cri.go:89] found id: ""
	I1213 14:57:23.904371 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.904379 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:23.904384 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:23.904450 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:23.929036 1302865 cri.go:89] found id: ""
	I1213 14:57:23.929050 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.929058 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:23.929063 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:23.929124 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:23.954748 1302865 cri.go:89] found id: ""
	I1213 14:57:23.954762 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.954779 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:23.954785 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:23.954854 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:23.979661 1302865 cri.go:89] found id: ""
	I1213 14:57:23.979676 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.979683 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:23.979687 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:23.979750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:24.009902 1302865 cri.go:89] found id: ""
	I1213 14:57:24.009918 1302865 logs.go:282] 0 containers: []
	W1213 14:57:24.009925 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:24.009935 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:24.009946 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:24.079943 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:24.070877   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.071501   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073325   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073881   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.075497   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:24.070877   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.071501   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073325   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073881   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.075497   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:24.079954 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:24.079966 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:24.144015 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:24.144037 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:24.174637 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:24.174654 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:24.235392 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:24.235413 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:26.753238 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:26.763339 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:26.763404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:26.788474 1302865 cri.go:89] found id: ""
	I1213 14:57:26.788487 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.788494 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:26.788499 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:26.788559 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:26.814440 1302865 cri.go:89] found id: ""
	I1213 14:57:26.814454 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.814461 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:26.814466 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:26.814524 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:26.841795 1302865 cri.go:89] found id: ""
	I1213 14:57:26.841809 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.841816 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:26.841821 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:26.841880 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:26.869399 1302865 cri.go:89] found id: ""
	I1213 14:57:26.869413 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.869420 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:26.869425 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:26.869482 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:26.892445 1302865 cri.go:89] found id: ""
	I1213 14:57:26.892459 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.892467 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:26.892472 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:26.892535 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:26.916537 1302865 cri.go:89] found id: ""
	I1213 14:57:26.916558 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.916565 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:26.916570 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:26.916639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:26.940628 1302865 cri.go:89] found id: ""
	I1213 14:57:26.940650 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.940658 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:26.940671 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:26.940681 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:26.969808 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:26.969827 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:27.025191 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:27.025211 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:27.042465 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:27.042482 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:27.122593 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:27.114680   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.115527   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117159   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117448   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.118857   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:27.114680   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.115527   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117159   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117448   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.118857   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:27.122618 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:27.122628 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:29.693191 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:29.703585 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:29.703652 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:29.732578 1302865 cri.go:89] found id: ""
	I1213 14:57:29.732593 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.732614 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:29.732621 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:29.732686 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:29.757517 1302865 cri.go:89] found id: ""
	I1213 14:57:29.757531 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.757538 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:29.757543 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:29.757604 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:29.785456 1302865 cri.go:89] found id: ""
	I1213 14:57:29.785470 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.785476 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:29.785482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:29.785544 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:29.809997 1302865 cri.go:89] found id: ""
	I1213 14:57:29.810011 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.810018 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:29.810023 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:29.810085 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:29.834277 1302865 cri.go:89] found id: ""
	I1213 14:57:29.834292 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.834299 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:29.834304 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:29.834366 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:29.858653 1302865 cri.go:89] found id: ""
	I1213 14:57:29.858667 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.858675 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:29.858686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:29.858749 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:29.884435 1302865 cri.go:89] found id: ""
	I1213 14:57:29.884450 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.884456 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:29.884464 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:29.884477 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:29.911338 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:29.911356 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:29.966819 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:29.966838 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:29.985125 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:29.985141 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:30.070789 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:30.059761   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.060525   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063144   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063902   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.066109   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:30.059761   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.060525   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063144   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063902   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.066109   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:30.070800 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:30.070811 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:32.643832 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:32.654329 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:32.654399 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:32.687375 1302865 cri.go:89] found id: ""
	I1213 14:57:32.687390 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.687398 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:32.687403 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:32.687465 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:32.712437 1302865 cri.go:89] found id: ""
	I1213 14:57:32.712452 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.712460 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:32.712465 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:32.712529 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:32.738220 1302865 cri.go:89] found id: ""
	I1213 14:57:32.738234 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.738241 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:32.738247 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:32.738310 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:32.763211 1302865 cri.go:89] found id: ""
	I1213 14:57:32.763226 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.763233 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:32.763238 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:32.763299 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:32.789049 1302865 cri.go:89] found id: ""
	I1213 14:57:32.789063 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.789071 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:32.789077 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:32.789141 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:32.815194 1302865 cri.go:89] found id: ""
	I1213 14:57:32.815208 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.815215 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:32.815221 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:32.815284 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:32.840629 1302865 cri.go:89] found id: ""
	I1213 14:57:32.840646 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.840653 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:32.840661 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:32.840672 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:32.868556 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:32.868574 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:32.923451 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:32.923472 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:32.940492 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:32.940508 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:33.014646 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:33.000281   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.001080   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.003301   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.004000   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.006154   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:33.000281   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.001080   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.003301   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.004000   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.006154   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:33.014656 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:33.014680 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:35.576582 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:35.586876 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:35.586939 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:35.612619 1302865 cri.go:89] found id: ""
	I1213 14:57:35.612634 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.612641 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:35.612646 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:35.612714 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:35.637275 1302865 cri.go:89] found id: ""
	I1213 14:57:35.637289 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.637296 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:35.637302 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:35.637363 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:35.661936 1302865 cri.go:89] found id: ""
	I1213 14:57:35.661950 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.661957 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:35.661962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:35.662035 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:35.691702 1302865 cri.go:89] found id: ""
	I1213 14:57:35.691716 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.691722 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:35.691727 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:35.691789 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:35.719594 1302865 cri.go:89] found id: ""
	I1213 14:57:35.719608 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.719614 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:35.719619 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:35.719685 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:35.747602 1302865 cri.go:89] found id: ""
	I1213 14:57:35.747617 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.747624 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:35.747629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:35.747690 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:35.772489 1302865 cri.go:89] found id: ""
	I1213 14:57:35.772503 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.772510 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:35.772517 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:35.772534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:35.801457 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:35.801474 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:35.859688 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:35.859708 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:35.877069 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:35.877087 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:35.942565 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:35.934502   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.935224   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.936888   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.937197   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.938690   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:35.934502   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.935224   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.936888   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.937197   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.938690   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:35.942576 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:35.942595 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:38.506862 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:38.517509 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:38.517575 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:38.542481 1302865 cri.go:89] found id: ""
	I1213 14:57:38.542496 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.542512 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:38.542517 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:38.542586 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:38.567177 1302865 cri.go:89] found id: ""
	I1213 14:57:38.567191 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.567198 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:38.567202 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:38.567264 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:38.591952 1302865 cri.go:89] found id: ""
	I1213 14:57:38.591967 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.591974 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:38.591979 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:38.592036 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:38.615589 1302865 cri.go:89] found id: ""
	I1213 14:57:38.615604 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.615619 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:38.615625 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:38.615697 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:38.641025 1302865 cri.go:89] found id: ""
	I1213 14:57:38.641039 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.641046 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:38.641051 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:38.641115 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:38.666245 1302865 cri.go:89] found id: ""
	I1213 14:57:38.666259 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.666276 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:38.666282 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:38.666355 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:38.691177 1302865 cri.go:89] found id: ""
	I1213 14:57:38.691191 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.691198 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:38.691206 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:38.691217 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:38.748984 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:38.749004 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:38.765774 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:38.765791 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:38.833656 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:38.825292   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.825956   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.827650   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.828246   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.829830   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:38.825292   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.825956   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.827650   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.828246   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.829830   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:38.833672 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:38.833683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:38.895503 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:38.895524 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:41.424760 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:41.435082 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:41.435154 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:41.460250 1302865 cri.go:89] found id: ""
	I1213 14:57:41.460265 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.460273 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:41.460278 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:41.460338 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:41.490003 1302865 cri.go:89] found id: ""
	I1213 14:57:41.490017 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.490024 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:41.490029 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:41.490094 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:41.515086 1302865 cri.go:89] found id: ""
	I1213 14:57:41.515100 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.515107 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:41.515112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:41.515173 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:41.540169 1302865 cri.go:89] found id: ""
	I1213 14:57:41.540183 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.540205 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:41.540211 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:41.540279 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:41.564345 1302865 cri.go:89] found id: ""
	I1213 14:57:41.564358 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.564365 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:41.564370 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:41.564429 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:41.589001 1302865 cri.go:89] found id: ""
	I1213 14:57:41.589015 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.589022 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:41.589027 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:41.589091 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:41.617434 1302865 cri.go:89] found id: ""
	I1213 14:57:41.617447 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.617455 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:41.617462 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:41.617471 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:41.683384 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:41.683411 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:41.711592 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:41.711611 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:41.769286 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:41.769305 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:41.786199 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:41.786219 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:41.854485 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:41.846163   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.846886   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848432   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848940   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.850514   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:41.846163   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.846886   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848432   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848940   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.850514   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:44.355606 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:44.369969 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:44.370032 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:44.401460 1302865 cri.go:89] found id: ""
	I1213 14:57:44.401474 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.401481 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:44.401486 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:44.401548 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:44.431513 1302865 cri.go:89] found id: ""
	I1213 14:57:44.431527 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.431534 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:44.431539 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:44.431600 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:44.457242 1302865 cri.go:89] found id: ""
	I1213 14:57:44.457256 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.457263 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:44.457268 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:44.457329 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:44.482224 1302865 cri.go:89] found id: ""
	I1213 14:57:44.482238 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.482245 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:44.482250 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:44.482313 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:44.509856 1302865 cri.go:89] found id: ""
	I1213 14:57:44.509871 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.509878 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:44.509884 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:44.509950 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:44.533977 1302865 cri.go:89] found id: ""
	I1213 14:57:44.533992 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.533999 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:44.534005 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:44.534069 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:44.562015 1302865 cri.go:89] found id: ""
	I1213 14:57:44.562029 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.562036 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:44.562044 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:44.562055 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:44.629999 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:44.621407   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.622108   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.623865   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.624500   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.626099   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:44.621407   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.622108   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.623865   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.624500   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.626099   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:44.630009 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:44.630020 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:44.697021 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:44.697042 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:44.725319 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:44.725336 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:44.783033 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:44.783053 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:47.300684 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:47.311369 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:47.311431 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:47.343773 1302865 cri.go:89] found id: ""
	I1213 14:57:47.343787 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.343794 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:47.343800 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:47.343864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:47.373867 1302865 cri.go:89] found id: ""
	I1213 14:57:47.373881 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.373888 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:47.373893 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:47.373950 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:47.409488 1302865 cri.go:89] found id: ""
	I1213 14:57:47.409503 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.409510 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:47.409515 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:47.409576 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:47.436144 1302865 cri.go:89] found id: ""
	I1213 14:57:47.436160 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.436166 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:47.436172 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:47.436231 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:47.459642 1302865 cri.go:89] found id: ""
	I1213 14:57:47.459656 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.459664 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:47.459669 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:47.459728 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:47.488525 1302865 cri.go:89] found id: ""
	I1213 14:57:47.488539 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.488546 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:47.488589 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:47.488660 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:47.513277 1302865 cri.go:89] found id: ""
	I1213 14:57:47.513304 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.513312 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:47.513320 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:47.513333 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:47.569182 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:47.569201 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:47.586016 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:47.586033 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:47.657399 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:47.648527   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.649441   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651289   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651877   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.653400   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:47.648527   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.649441   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651289   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651877   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.653400   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:47.657410 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:47.657421 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:47.719756 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:47.719776 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:50.250366 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:50.261360 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:50.261430 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:50.285575 1302865 cri.go:89] found id: ""
	I1213 14:57:50.285588 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.285595 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:50.285601 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:50.285657 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:50.313925 1302865 cri.go:89] found id: ""
	I1213 14:57:50.313939 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.313946 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:50.313951 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:50.314025 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:50.350634 1302865 cri.go:89] found id: ""
	I1213 14:57:50.350653 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.350660 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:50.350665 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:50.350725 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:50.377901 1302865 cri.go:89] found id: ""
	I1213 14:57:50.377915 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.377922 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:50.377927 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:50.377987 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:50.408528 1302865 cri.go:89] found id: ""
	I1213 14:57:50.408550 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.408557 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:50.408562 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:50.408637 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:50.434189 1302865 cri.go:89] found id: ""
	I1213 14:57:50.434203 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.434212 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:50.434217 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:50.434275 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:50.459353 1302865 cri.go:89] found id: ""
	I1213 14:57:50.459367 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.459373 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:50.459381 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:50.459391 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:50.515565 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:50.515585 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:50.532866 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:50.532883 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:50.599094 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:50.590849   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.591681   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593186   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593701   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.595193   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:50.590849   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.591681   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593186   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593701   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.595193   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:50.599104 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:50.599115 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:50.663140 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:50.663159 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:53.200108 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:53.210621 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:53.210684 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:53.236457 1302865 cri.go:89] found id: ""
	I1213 14:57:53.236471 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.236478 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:53.236483 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:53.236545 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:53.269649 1302865 cri.go:89] found id: ""
	I1213 14:57:53.269664 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.269670 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:53.269677 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:53.269738 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:53.293759 1302865 cri.go:89] found id: ""
	I1213 14:57:53.293774 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.293781 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:53.293786 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:53.293846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:53.318675 1302865 cri.go:89] found id: ""
	I1213 14:57:53.318690 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.318696 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:53.318701 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:53.318765 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:53.353544 1302865 cri.go:89] found id: ""
	I1213 14:57:53.353558 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.353564 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:53.353569 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:53.353630 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:53.381535 1302865 cri.go:89] found id: ""
	I1213 14:57:53.381549 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.381565 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:53.381571 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:53.381641 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:53.408473 1302865 cri.go:89] found id: ""
	I1213 14:57:53.408487 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.408494 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:53.408502 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:53.408514 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:53.463646 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:53.463670 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:53.480500 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:53.480518 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:53.545969 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:53.538062   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.538490   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540123   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540597   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.542043   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:53.538062   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.538490   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540123   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540597   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.542043   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:53.545979 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:53.545991 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:53.607729 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:53.607750 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:56.139407 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:56.150264 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:56.150335 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:56.175852 1302865 cri.go:89] found id: ""
	I1213 14:57:56.175866 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.175873 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:56.175878 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:56.175942 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:56.202887 1302865 cri.go:89] found id: ""
	I1213 14:57:56.202901 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.202908 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:56.202921 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:56.202981 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:56.229038 1302865 cri.go:89] found id: ""
	I1213 14:57:56.229053 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.229060 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:56.229065 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:56.229125 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:56.253081 1302865 cri.go:89] found id: ""
	I1213 14:57:56.253096 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.253103 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:56.253108 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:56.253172 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:56.277822 1302865 cri.go:89] found id: ""
	I1213 14:57:56.277836 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.277843 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:56.277849 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:56.277910 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:56.302419 1302865 cri.go:89] found id: ""
	I1213 14:57:56.302435 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.302442 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:56.302447 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:56.302508 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:56.327036 1302865 cri.go:89] found id: ""
	I1213 14:57:56.327050 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.327057 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:56.327066 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:56.327078 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:56.353968 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:56.353986 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:56.426915 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:56.418628   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.419173   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.420794   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.421363   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.422912   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:56.418628   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.419173   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.420794   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.421363   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.422912   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:56.426926 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:56.426943 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:56.488491 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:56.488513 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:56.516737 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:56.516753 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:59.077330 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:59.087745 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:59.087809 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:59.113689 1302865 cri.go:89] found id: ""
	I1213 14:57:59.113703 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.113710 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:59.113715 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:59.113774 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:59.138884 1302865 cri.go:89] found id: ""
	I1213 14:57:59.138898 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.138905 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:59.138911 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:59.138976 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:59.164226 1302865 cri.go:89] found id: ""
	I1213 14:57:59.164240 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.164246 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:59.164254 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:59.164312 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:59.189753 1302865 cri.go:89] found id: ""
	I1213 14:57:59.189767 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.189774 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:59.189779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:59.189840 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:59.219066 1302865 cri.go:89] found id: ""
	I1213 14:57:59.219080 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.219086 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:59.219092 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:59.219152 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:59.243456 1302865 cri.go:89] found id: ""
	I1213 14:57:59.243470 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.243477 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:59.243482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:59.243544 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:59.267676 1302865 cri.go:89] found id: ""
	I1213 14:57:59.267692 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.267699 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:59.267707 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:59.267719 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:59.284600 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:59.284617 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:59.356184 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:59.346354   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.347267   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349163   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349876   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.351596   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:59.346354   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.347267   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349163   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349876   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.351596   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:59.356202 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:59.356215 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:59.427513 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:59.427535 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:59.459203 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:59.459220 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:02.016233 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:02.027182 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:02.027246 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:02.053453 1302865 cri.go:89] found id: ""
	I1213 14:58:02.053467 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.053475 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:02.053480 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:02.053543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:02.081288 1302865 cri.go:89] found id: ""
	I1213 14:58:02.081303 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.081310 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:02.081315 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:02.081377 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:02.106556 1302865 cri.go:89] found id: ""
	I1213 14:58:02.106572 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.106579 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:02.106585 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:02.106645 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:02.131201 1302865 cri.go:89] found id: ""
	I1213 14:58:02.131215 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.131221 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:02.131226 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:02.131286 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:02.156170 1302865 cri.go:89] found id: ""
	I1213 14:58:02.156194 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.156202 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:02.156207 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:02.156275 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:02.185059 1302865 cri.go:89] found id: ""
	I1213 14:58:02.185073 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.185080 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:02.185086 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:02.185153 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:02.209854 1302865 cri.go:89] found id: ""
	I1213 14:58:02.209870 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.209884 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:02.209893 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:02.209903 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:02.279934 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:02.270936   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.271416   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273300   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273902   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.275651   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:02.270936   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.271416   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273300   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273902   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.275651   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:02.279958 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:02.279970 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:02.341869 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:02.341888 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:02.370761 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:02.370783 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:02.431851 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:02.431869 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:04.950137 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:04.960995 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:04.961059 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:04.986243 1302865 cri.go:89] found id: ""
	I1213 14:58:04.986257 1302865 logs.go:282] 0 containers: []
	W1213 14:58:04.986264 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:04.986269 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:04.986329 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:05.016170 1302865 cri.go:89] found id: ""
	I1213 14:58:05.016192 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.016200 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:05.016206 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:05.016270 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:05.042103 1302865 cri.go:89] found id: ""
	I1213 14:58:05.042117 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.042124 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:05.042129 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:05.042188 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:05.066050 1302865 cri.go:89] found id: ""
	I1213 14:58:05.066065 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.066071 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:05.066077 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:05.066141 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:05.091600 1302865 cri.go:89] found id: ""
	I1213 14:58:05.091615 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.091623 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:05.091634 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:05.091698 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:05.117406 1302865 cri.go:89] found id: ""
	I1213 14:58:05.117420 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.117427 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:05.117432 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:05.117491 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:05.143774 1302865 cri.go:89] found id: ""
	I1213 14:58:05.143788 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.143794 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:05.143802 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:05.143823 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:05.198717 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:05.198736 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:05.216110 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:05.216127 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:05.281771 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:05.273944   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.274471   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276069   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276389   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.277907   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:05.273944   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.274471   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276069   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276389   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.277907   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:05.281792 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:05.281804 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:05.344051 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:05.344070 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:07.872032 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:07.883862 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:07.883925 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:07.908603 1302865 cri.go:89] found id: ""
	I1213 14:58:07.908616 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.908623 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:07.908628 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:07.908696 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:07.932609 1302865 cri.go:89] found id: ""
	I1213 14:58:07.932624 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.932631 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:07.932636 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:07.932729 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:07.957476 1302865 cri.go:89] found id: ""
	I1213 14:58:07.957490 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.957497 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:07.957502 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:07.957561 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:07.983994 1302865 cri.go:89] found id: ""
	I1213 14:58:07.984014 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.984022 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:07.984027 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:07.984090 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:08.016758 1302865 cri.go:89] found id: ""
	I1213 14:58:08.016772 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.016779 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:08.016784 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:08.016850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:08.048311 1302865 cri.go:89] found id: ""
	I1213 14:58:08.048326 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.048333 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:08.048338 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:08.048404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:08.074196 1302865 cri.go:89] found id: ""
	I1213 14:58:08.074211 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.074219 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:08.074226 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:08.074237 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:08.139046 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:08.139073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:08.167121 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:08.167141 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:08.222634 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:08.222664 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:08.240309 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:08.240325 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:08.310479 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:08.301605   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.302332   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304082   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304629   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.306246   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:08.301605   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.302332   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304082   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304629   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.306246   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:10.810723 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:10.820844 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:10.820953 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:10.865862 1302865 cri.go:89] found id: ""
	I1213 14:58:10.865875 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.865882 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:10.865888 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:10.865959 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:10.896607 1302865 cri.go:89] found id: ""
	I1213 14:58:10.896621 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.896628 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:10.896634 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:10.896710 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:10.924657 1302865 cri.go:89] found id: ""
	I1213 14:58:10.924671 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.924678 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:10.924684 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:10.924748 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:10.949300 1302865 cri.go:89] found id: ""
	I1213 14:58:10.949314 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.949321 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:10.949326 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:10.949388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:10.973896 1302865 cri.go:89] found id: ""
	I1213 14:58:10.973910 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.973917 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:10.973922 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:10.973983 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:10.998200 1302865 cri.go:89] found id: ""
	I1213 14:58:10.998214 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.998231 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:10.998237 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:10.998295 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:11.024841 1302865 cri.go:89] found id: ""
	I1213 14:58:11.024856 1302865 logs.go:282] 0 containers: []
	W1213 14:58:11.024863 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:11.024871 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:11.024886 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:11.092350 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:11.083613   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.084237   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086051   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086531   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.088132   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:11.083613   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.084237   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086051   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086531   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.088132   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:11.092361 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:11.092372 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:11.154591 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:11.154612 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:11.187883 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:11.187899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:11.248594 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:11.248613 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:13.766160 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:13.776057 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:13.776115 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:13.800863 1302865 cri.go:89] found id: ""
	I1213 14:58:13.800877 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.800884 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:13.800889 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:13.800990 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:13.825283 1302865 cri.go:89] found id: ""
	I1213 14:58:13.825298 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.825305 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:13.825309 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:13.825368 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:13.857732 1302865 cri.go:89] found id: ""
	I1213 14:58:13.857746 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.857753 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:13.857758 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:13.857816 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:13.891546 1302865 cri.go:89] found id: ""
	I1213 14:58:13.891560 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.891566 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:13.891572 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:13.891629 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:13.918725 1302865 cri.go:89] found id: ""
	I1213 14:58:13.918738 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.918746 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:13.918750 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:13.918810 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:13.942434 1302865 cri.go:89] found id: ""
	I1213 14:58:13.942448 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.942455 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:13.942460 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:13.942521 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:13.966591 1302865 cri.go:89] found id: ""
	I1213 14:58:13.966606 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.966613 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:13.966621 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:13.966632 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:13.983200 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:13.983217 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:14.050601 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:14.041722   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.042310   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044028   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044598   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.046179   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:14.041722   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.042310   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044028   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044598   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.046179   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:14.050610 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:14.050622 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:14.111742 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:14.111761 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:14.139171 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:14.139189 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:16.694504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:16.704690 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:16.704753 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:16.730421 1302865 cri.go:89] found id: ""
	I1213 14:58:16.730436 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.730444 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:16.730449 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:16.730510 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:16.755642 1302865 cri.go:89] found id: ""
	I1213 14:58:16.755657 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.755676 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:16.755681 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:16.755741 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:16.780583 1302865 cri.go:89] found id: ""
	I1213 14:58:16.780597 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.780604 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:16.780610 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:16.780685 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:16.809520 1302865 cri.go:89] found id: ""
	I1213 14:58:16.809534 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.809542 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:16.809547 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:16.809606 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:16.845772 1302865 cri.go:89] found id: ""
	I1213 14:58:16.845787 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.845794 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:16.845799 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:16.845867 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:16.871303 1302865 cri.go:89] found id: ""
	I1213 14:58:16.871338 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.871345 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:16.871350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:16.871411 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:16.897846 1302865 cri.go:89] found id: ""
	I1213 14:58:16.897859 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.897866 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:16.897875 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:16.897885 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:16.959059 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:16.959079 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:16.996406 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:16.996421 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:17.052568 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:17.052589 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:17.069678 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:17.069696 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:17.133677 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:17.125422   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.125964   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.127591   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.128083   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.129662   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:17.125422   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.125964   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.127591   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.128083   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.129662   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:19.633920 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:19.644044 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:19.644109 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:19.668667 1302865 cri.go:89] found id: ""
	I1213 14:58:19.668681 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.668688 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:19.668693 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:19.668759 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:19.693045 1302865 cri.go:89] found id: ""
	I1213 14:58:19.693059 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.693066 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:19.693071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:19.693134 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:19.717622 1302865 cri.go:89] found id: ""
	I1213 14:58:19.717637 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.717643 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:19.717649 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:19.717708 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:19.742933 1302865 cri.go:89] found id: ""
	I1213 14:58:19.742948 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.742954 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:19.742962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:19.743024 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:19.767055 1302865 cri.go:89] found id: ""
	I1213 14:58:19.767069 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.767076 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:19.767081 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:19.767139 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:19.793086 1302865 cri.go:89] found id: ""
	I1213 14:58:19.793100 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.793107 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:19.793112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:19.793172 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:19.816884 1302865 cri.go:89] found id: ""
	I1213 14:58:19.816898 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.816905 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:19.816912 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:19.816927 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:19.833746 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:19.833763 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:19.912181 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:19.904591   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.905016   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906518   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906824   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.908282   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:19.904591   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.905016   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906518   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906824   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.908282   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:19.912191 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:19.912202 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:19.973611 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:19.973631 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:20.005249 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:20.005269 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:22.571015 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:22.581487 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:22.581553 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:22.606385 1302865 cri.go:89] found id: ""
	I1213 14:58:22.606399 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.606405 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:22.606411 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:22.606466 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:22.631290 1302865 cri.go:89] found id: ""
	I1213 14:58:22.631304 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.631330 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:22.631341 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:22.631402 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:22.656039 1302865 cri.go:89] found id: ""
	I1213 14:58:22.656053 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.656059 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:22.656064 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:22.656123 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:22.680255 1302865 cri.go:89] found id: ""
	I1213 14:58:22.680268 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.680275 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:22.680281 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:22.680339 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:22.705412 1302865 cri.go:89] found id: ""
	I1213 14:58:22.705426 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.705434 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:22.705439 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:22.705501 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:22.729869 1302865 cri.go:89] found id: ""
	I1213 14:58:22.729885 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.729891 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:22.729897 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:22.729961 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:22.757980 1302865 cri.go:89] found id: ""
	I1213 14:58:22.757994 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.758001 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:22.758009 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:22.758022 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:22.774416 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:22.774433 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:22.850017 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:22.837138   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.837894   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.843564   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.844183   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.845796   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:22.837138   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.837894   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.843564   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.844183   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.845796   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:22.850034 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:22.850045 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:22.916305 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:22.916327 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:22.946422 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:22.946438 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:25.504766 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:25.515062 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:25.515129 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:25.539801 1302865 cri.go:89] found id: ""
	I1213 14:58:25.539815 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.539822 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:25.539827 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:25.539888 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:25.564134 1302865 cri.go:89] found id: ""
	I1213 14:58:25.564148 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.564155 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:25.564159 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:25.564218 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:25.588150 1302865 cri.go:89] found id: ""
	I1213 14:58:25.588165 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.588173 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:25.588178 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:25.588239 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:25.613567 1302865 cri.go:89] found id: ""
	I1213 14:58:25.613581 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.613588 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:25.613593 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:25.613659 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:25.643274 1302865 cri.go:89] found id: ""
	I1213 14:58:25.643290 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.643297 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:25.643303 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:25.643388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:25.668136 1302865 cri.go:89] found id: ""
	I1213 14:58:25.668150 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.668157 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:25.668162 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:25.668223 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:25.693114 1302865 cri.go:89] found id: ""
	I1213 14:58:25.693128 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.693135 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:25.693143 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:25.693152 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:25.751087 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:25.751106 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:25.768578 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:25.768598 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:25.842306 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:25.826336   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.827040   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.829047   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.830610   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.833626   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:25.826336   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.827040   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.829047   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.830610   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.833626   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:25.842315 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:25.842325 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:25.934744 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:25.934771 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:28.468857 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:28.479478 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:28.479543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:28.509273 1302865 cri.go:89] found id: ""
	I1213 14:58:28.509286 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.509293 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:28.509299 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:28.509360 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:28.535574 1302865 cri.go:89] found id: ""
	I1213 14:58:28.535588 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.535595 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:28.535601 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:28.535660 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:28.561231 1302865 cri.go:89] found id: ""
	I1213 14:58:28.561244 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.561251 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:28.561256 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:28.561316 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:28.586867 1302865 cri.go:89] found id: ""
	I1213 14:58:28.586881 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.586897 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:28.586903 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:28.586971 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:28.613781 1302865 cri.go:89] found id: ""
	I1213 14:58:28.613795 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.613802 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:28.613807 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:28.613865 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:28.639226 1302865 cri.go:89] found id: ""
	I1213 14:58:28.639247 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.639255 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:28.639260 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:28.639351 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:28.664957 1302865 cri.go:89] found id: ""
	I1213 14:58:28.664971 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.664977 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:28.664985 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:28.664995 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:28.681545 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:28.681562 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:28.746274 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:28.738221   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.738895   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740429   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740922   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.742410   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:28.738221   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.738895   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740429   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740922   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.742410   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:28.746286 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:28.746297 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:28.811866 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:28.811886 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:28.853916 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:28.853932 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:31.417796 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:31.427841 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:31.427906 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:31.454876 1302865 cri.go:89] found id: ""
	I1213 14:58:31.454890 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.454897 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:31.454903 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:31.454967 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:31.478745 1302865 cri.go:89] found id: ""
	I1213 14:58:31.478763 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.478770 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:31.478774 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:31.478834 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:31.504045 1302865 cri.go:89] found id: ""
	I1213 14:58:31.504059 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.504066 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:31.504071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:31.504132 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:31.536667 1302865 cri.go:89] found id: ""
	I1213 14:58:31.536687 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.536694 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:31.536699 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:31.536759 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:31.561651 1302865 cri.go:89] found id: ""
	I1213 14:58:31.561665 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.561672 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:31.561679 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:31.561740 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:31.590467 1302865 cri.go:89] found id: ""
	I1213 14:58:31.590487 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.590494 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:31.590499 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:31.590572 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:31.621443 1302865 cri.go:89] found id: ""
	I1213 14:58:31.621457 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.621467 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:31.621475 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:31.621485 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:31.689190 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:31.680703   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.681594   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.682366   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.683913   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.684339   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:31.680703   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.681594   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.682366   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.683913   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.684339   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:31.689199 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:31.689210 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:31.750918 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:31.750940 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:31.777989 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:31.778007 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:31.837415 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:31.837438 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:34.355220 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:34.365583 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:34.365646 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:34.390861 1302865 cri.go:89] found id: ""
	I1213 14:58:34.390875 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.390882 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:34.390887 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:34.390945 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:34.419452 1302865 cri.go:89] found id: ""
	I1213 14:58:34.419466 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.419473 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:34.419478 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:34.419540 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:34.444048 1302865 cri.go:89] found id: ""
	I1213 14:58:34.444062 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.444069 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:34.444073 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:34.444135 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:34.472603 1302865 cri.go:89] found id: ""
	I1213 14:58:34.472617 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.472623 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:34.472629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:34.472693 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:34.496330 1302865 cri.go:89] found id: ""
	I1213 14:58:34.496344 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.496351 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:34.496356 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:34.496415 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:34.521267 1302865 cri.go:89] found id: ""
	I1213 14:58:34.521281 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.521288 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:34.521294 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:34.521355 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:34.545219 1302865 cri.go:89] found id: ""
	I1213 14:58:34.545234 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.545241 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:34.545248 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:34.545263 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:34.611331 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:34.602304   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.603074   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.604885   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.605533   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.607098   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:34.602304   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.603074   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.604885   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.605533   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.607098   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:34.611342 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:34.611352 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:34.674005 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:34.674023 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:34.701768 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:34.701784 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:34.760313 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:34.760332 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:37.279813 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:37.289901 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:37.289961 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:37.314082 1302865 cri.go:89] found id: ""
	I1213 14:58:37.314097 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.314103 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:37.314115 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:37.314174 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:37.349456 1302865 cri.go:89] found id: ""
	I1213 14:58:37.349470 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.349477 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:37.349482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:37.349540 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:37.376791 1302865 cri.go:89] found id: ""
	I1213 14:58:37.376805 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.376812 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:37.376817 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:37.376877 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:37.400702 1302865 cri.go:89] found id: ""
	I1213 14:58:37.400717 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.400724 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:37.400730 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:37.400792 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:37.424348 1302865 cri.go:89] found id: ""
	I1213 14:58:37.424363 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.424370 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:37.424375 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:37.424435 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:37.449182 1302865 cri.go:89] found id: ""
	I1213 14:58:37.449197 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.449204 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:37.449209 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:37.449270 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:37.476252 1302865 cri.go:89] found id: ""
	I1213 14:58:37.476266 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.476273 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:37.476280 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:37.476294 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:37.534602 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:37.534621 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:37.552019 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:37.552037 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:37.614270 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:37.605713   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.606478   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608056   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608699   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.610244   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:37.605713   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.606478   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608056   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608699   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.610244   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:37.614281 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:37.614292 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:37.676894 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:37.676913 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:40.209558 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:40.220003 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:40.220065 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:40.246553 1302865 cri.go:89] found id: ""
	I1213 14:58:40.246567 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.246574 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:40.246579 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:40.246642 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:40.270663 1302865 cri.go:89] found id: ""
	I1213 14:58:40.270677 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.270684 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:40.270689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:40.270750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:40.296263 1302865 cri.go:89] found id: ""
	I1213 14:58:40.296278 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.296285 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:40.296292 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:40.296352 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:40.320181 1302865 cri.go:89] found id: ""
	I1213 14:58:40.320195 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.320204 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:40.320208 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:40.320268 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:40.345140 1302865 cri.go:89] found id: ""
	I1213 14:58:40.345155 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.345162 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:40.345167 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:40.345236 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:40.368989 1302865 cri.go:89] found id: ""
	I1213 14:58:40.369003 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.369010 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:40.369015 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:40.369075 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:40.393631 1302865 cri.go:89] found id: ""
	I1213 14:58:40.393646 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.393653 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:40.393661 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:40.393672 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:40.421318 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:40.421334 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:40.480359 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:40.480379 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:40.497525 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:40.497544 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:40.565603 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:40.557124   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.557721   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559295   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559804   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.561673   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:40.557124   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.557721   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559295   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559804   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.561673   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:40.565614 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:40.565625 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:43.127433 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:43.141684 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:43.141744 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:43.166921 1302865 cri.go:89] found id: ""
	I1213 14:58:43.166935 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.166942 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:43.166947 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:43.167010 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:43.191796 1302865 cri.go:89] found id: ""
	I1213 14:58:43.191810 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.191817 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:43.191823 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:43.191883 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:43.220968 1302865 cri.go:89] found id: ""
	I1213 14:58:43.220982 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.220988 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:43.220993 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:43.221050 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:43.249138 1302865 cri.go:89] found id: ""
	I1213 14:58:43.249153 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.249160 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:43.249166 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:43.249226 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:43.273972 1302865 cri.go:89] found id: ""
	I1213 14:58:43.273986 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.273993 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:43.273998 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:43.274056 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:43.298424 1302865 cri.go:89] found id: ""
	I1213 14:58:43.298439 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.298446 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:43.298451 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:43.298523 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:43.326886 1302865 cri.go:89] found id: ""
	I1213 14:58:43.326900 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.326907 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:43.326915 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:43.326925 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:43.383183 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:43.383202 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:43.401545 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:43.401564 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:43.472321 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:43.463674   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.464321   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466101   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466707   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.468411   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:43.463674   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.464321   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466101   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466707   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.468411   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:43.472331 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:43.472347 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:43.535483 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:43.535504 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:46.069443 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:46.079671 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:46.079735 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:46.112232 1302865 cri.go:89] found id: ""
	I1213 14:58:46.112246 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.112263 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:46.112268 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:46.112334 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:46.143946 1302865 cri.go:89] found id: ""
	I1213 14:58:46.143960 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.143968 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:46.143973 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:46.144034 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:46.172869 1302865 cri.go:89] found id: ""
	I1213 14:58:46.172893 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.172901 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:46.172906 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:46.172969 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:46.198118 1302865 cri.go:89] found id: ""
	I1213 14:58:46.198132 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.198139 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:46.198144 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:46.198210 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:46.226657 1302865 cri.go:89] found id: ""
	I1213 14:58:46.226672 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.226679 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:46.226689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:46.226750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:46.250158 1302865 cri.go:89] found id: ""
	I1213 14:58:46.250183 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.250190 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:46.250199 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:46.250268 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:46.275259 1302865 cri.go:89] found id: ""
	I1213 14:58:46.275274 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.275281 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:46.275303 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:46.275335 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:46.349416 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:46.340779   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.341521   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343041   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343652   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.345289   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:46.340779   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.341521   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343041   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343652   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.345289   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:46.349427 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:46.349440 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:46.412854 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:46.412874 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:46.443625 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:46.443641 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:46.501088 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:46.501108 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:49.018999 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:49.029334 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:49.029404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:49.054853 1302865 cri.go:89] found id: ""
	I1213 14:58:49.054867 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.054874 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:49.054879 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:49.054941 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:49.081166 1302865 cri.go:89] found id: ""
	I1213 14:58:49.081185 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.081193 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:49.081198 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:49.081261 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:49.109404 1302865 cri.go:89] found id: ""
	I1213 14:58:49.109418 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.109425 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:49.109430 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:49.109493 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:49.136643 1302865 cri.go:89] found id: ""
	I1213 14:58:49.136658 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.136665 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:49.136670 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:49.136741 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:49.165751 1302865 cri.go:89] found id: ""
	I1213 14:58:49.165765 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.165772 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:49.165777 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:49.165837 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:49.193225 1302865 cri.go:89] found id: ""
	I1213 14:58:49.193239 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.193246 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:49.193252 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:49.193314 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:49.221440 1302865 cri.go:89] found id: ""
	I1213 14:58:49.221455 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.221462 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:49.221470 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:49.221480 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:49.277216 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:49.277234 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:49.293907 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:49.293927 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:49.356075 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:49.348073   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.348458   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350225   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350571   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.352111   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:49.348073   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.348458   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350225   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350571   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.352111   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:49.356085 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:49.356095 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:49.418015 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:49.418034 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:51.951013 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:51.961457 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:51.961522 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:51.988624 1302865 cri.go:89] found id: ""
	I1213 14:58:51.988638 1302865 logs.go:282] 0 containers: []
	W1213 14:58:51.988645 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:51.988650 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:51.988725 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:52.015499 1302865 cri.go:89] found id: ""
	I1213 14:58:52.015513 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.015520 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:52.015526 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:52.015589 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:52.041762 1302865 cri.go:89] found id: ""
	I1213 14:58:52.041777 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.041784 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:52.041789 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:52.041850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:52.068323 1302865 cri.go:89] found id: ""
	I1213 14:58:52.068338 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.068345 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:52.068350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:52.068415 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:52.106065 1302865 cri.go:89] found id: ""
	I1213 14:58:52.106079 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.106086 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:52.106091 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:52.106160 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:52.140252 1302865 cri.go:89] found id: ""
	I1213 14:58:52.140272 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.140279 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:52.140284 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:52.140343 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:52.167100 1302865 cri.go:89] found id: ""
	I1213 14:58:52.167113 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.167120 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:52.167128 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:52.167138 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:52.226191 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:52.226210 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:52.243667 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:52.243683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:52.311033 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:52.302537   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.303096   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.304747   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.305085   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.306664   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:52.302537   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.303096   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.304747   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.305085   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.306664   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:52.311046 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:52.311057 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:52.372679 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:52.372703 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:54.903108 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:54.913373 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:54.913436 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:54.938658 1302865 cri.go:89] found id: ""
	I1213 14:58:54.938673 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.938680 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:54.938686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:54.938753 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:54.962838 1302865 cri.go:89] found id: ""
	I1213 14:58:54.962851 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.962866 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:54.962871 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:54.962942 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:54.988758 1302865 cri.go:89] found id: ""
	I1213 14:58:54.988773 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.988780 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:54.988785 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:54.988855 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:55.021177 1302865 cri.go:89] found id: ""
	I1213 14:58:55.021192 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.021200 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:55.021206 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:55.021272 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:55.049330 1302865 cri.go:89] found id: ""
	I1213 14:58:55.049344 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.049356 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:55.049361 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:55.049421 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:55.079835 1302865 cri.go:89] found id: ""
	I1213 14:58:55.079849 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.079856 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:55.079861 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:55.079920 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:55.107073 1302865 cri.go:89] found id: ""
	I1213 14:58:55.107087 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.107094 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:55.107102 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:55.107112 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:55.165853 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:55.165871 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:55.183109 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:55.183127 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:55.251642 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:55.242691   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.243211   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.244929   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.245470   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.247181   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:55.242691   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.243211   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.244929   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.245470   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.247181   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:55.251652 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:55.251664 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:55.317380 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:55.317399 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:57.847271 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:57.857537 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:57.857603 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:57.882391 1302865 cri.go:89] found id: ""
	I1213 14:58:57.882405 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.882412 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:57.882417 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:57.882490 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:57.905909 1302865 cri.go:89] found id: ""
	I1213 14:58:57.905923 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.905943 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:57.905948 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:57.906018 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:57.930237 1302865 cri.go:89] found id: ""
	I1213 14:58:57.930252 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.930259 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:57.930264 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:57.930337 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:57.958985 1302865 cri.go:89] found id: ""
	I1213 14:58:57.959014 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.959020 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:57.959031 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:57.959099 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:57.983693 1302865 cri.go:89] found id: ""
	I1213 14:58:57.983707 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.983714 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:57.983719 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:57.983779 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:58.012155 1302865 cri.go:89] found id: ""
	I1213 14:58:58.012170 1302865 logs.go:282] 0 containers: []
	W1213 14:58:58.012178 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:58.012183 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:58.012250 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:58.043700 1302865 cri.go:89] found id: ""
	I1213 14:58:58.043714 1302865 logs.go:282] 0 containers: []
	W1213 14:58:58.043722 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:58.043730 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:58.043742 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:58.105070 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:58.105098 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:58.123698 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:58.123717 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:58.194632 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:58.186276   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.187012   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.188759   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.189247   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.190768   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:58.186276   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.187012   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.188759   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.189247   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.190768   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:58.194642 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:58.194653 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:58.256210 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:58.256230 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:00.787680 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:00.798261 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:00.798326 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:00.826895 1302865 cri.go:89] found id: ""
	I1213 14:59:00.826908 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.826915 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:00.826921 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:00.826980 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:00.851410 1302865 cri.go:89] found id: ""
	I1213 14:59:00.851424 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.851431 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:00.851437 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:00.851510 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:00.876891 1302865 cri.go:89] found id: ""
	I1213 14:59:00.876906 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.876912 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:00.876917 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:00.876975 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:00.900564 1302865 cri.go:89] found id: ""
	I1213 14:59:00.900578 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.900585 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:00.900589 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:00.900647 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:00.925560 1302865 cri.go:89] found id: ""
	I1213 14:59:00.925574 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.925581 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:00.925586 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:00.925647 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:00.954298 1302865 cri.go:89] found id: ""
	I1213 14:59:00.954311 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.954319 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:00.954330 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:00.954388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:00.980684 1302865 cri.go:89] found id: ""
	I1213 14:59:00.980698 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.980704 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:00.980718 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:00.980731 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:01.048024 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:01.039594   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.040152   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.041748   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.042294   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.043863   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:01.039594   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.040152   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.041748   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.042294   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.043863   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:01.048033 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:01.048044 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:01.110723 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:01.110742 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:01.144966 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:01.144983 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:01.203272 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:01.203301 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:03.722770 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:03.733112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:03.733170 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:03.761042 1302865 cri.go:89] found id: ""
	I1213 14:59:03.761057 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.761064 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:03.761069 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:03.761130 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:03.789429 1302865 cri.go:89] found id: ""
	I1213 14:59:03.789443 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.789450 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:03.789455 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:03.789521 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:03.816916 1302865 cri.go:89] found id: ""
	I1213 14:59:03.816930 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.816937 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:03.816942 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:03.817001 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:03.844301 1302865 cri.go:89] found id: ""
	I1213 14:59:03.844317 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.844324 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:03.844329 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:03.844388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:03.873060 1302865 cri.go:89] found id: ""
	I1213 14:59:03.873075 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.873082 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:03.873087 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:03.873147 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:03.910513 1302865 cri.go:89] found id: ""
	I1213 14:59:03.910527 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.910534 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:03.910539 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:03.910601 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:03.938039 1302865 cri.go:89] found id: ""
	I1213 14:59:03.938053 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.938060 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:03.938067 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:03.938077 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:03.993458 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:03.993478 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:04.011140 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:04.011157 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:04.078339 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:04.069502   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.070272   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072094   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072604   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.074176   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:04.069502   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.070272   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072094   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072604   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.074176   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:04.078350 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:04.078361 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:04.142915 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:04.142934 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:06.673444 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:06.683643 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:06.683703 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:06.708707 1302865 cri.go:89] found id: ""
	I1213 14:59:06.708727 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.708734 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:06.708739 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:06.708799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:06.734465 1302865 cri.go:89] found id: ""
	I1213 14:59:06.734479 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.734486 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:06.734495 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:06.734584 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:06.759590 1302865 cri.go:89] found id: ""
	I1213 14:59:06.759603 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.759610 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:06.759615 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:06.759674 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:06.785693 1302865 cri.go:89] found id: ""
	I1213 14:59:06.785706 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.785713 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:06.785720 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:06.785777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:06.810125 1302865 cri.go:89] found id: ""
	I1213 14:59:06.810139 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.810146 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:06.810151 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:06.810215 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:06.835783 1302865 cri.go:89] found id: ""
	I1213 14:59:06.835797 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.835804 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:06.835809 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:06.835869 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:06.860909 1302865 cri.go:89] found id: ""
	I1213 14:59:06.860922 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.860929 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:06.860936 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:06.860946 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:06.916027 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:06.916047 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:06.933118 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:06.933135 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:06.997759 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:06.989536   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.990242   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.991821   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.992353   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.993901   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:06.989536   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.990242   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.991821   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.992353   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.993901   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:06.997769 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:06.997779 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:07.059939 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:07.059961 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:09.591076 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:09.601913 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:09.601975 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:09.626204 1302865 cri.go:89] found id: ""
	I1213 14:59:09.626218 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.626225 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:09.626230 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:09.626289 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:09.653443 1302865 cri.go:89] found id: ""
	I1213 14:59:09.653457 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.653463 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:09.653469 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:09.653531 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:09.678836 1302865 cri.go:89] found id: ""
	I1213 14:59:09.678851 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.678858 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:09.678865 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:09.678924 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:09.704492 1302865 cri.go:89] found id: ""
	I1213 14:59:09.704506 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.704514 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:09.704519 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:09.704581 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:09.733333 1302865 cri.go:89] found id: ""
	I1213 14:59:09.733355 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.733363 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:09.733368 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:09.733431 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:09.758847 1302865 cri.go:89] found id: ""
	I1213 14:59:09.758861 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.758869 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:09.758874 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:09.758946 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:09.785932 1302865 cri.go:89] found id: ""
	I1213 14:59:09.785946 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.785953 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:09.785962 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:09.785973 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:09.842054 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:09.842073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:09.859249 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:09.859273 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:09.924527 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:09.916673   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.917242   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.918722   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.919219   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.920662   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:09.916673   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.917242   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.918722   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.919219   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.920662   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:09.924536 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:09.924546 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:09.987531 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:09.987550 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:12.517373 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:12.529230 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:12.529292 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:12.558354 1302865 cri.go:89] found id: ""
	I1213 14:59:12.558368 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.558375 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:12.558380 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:12.558439 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:12.585312 1302865 cri.go:89] found id: ""
	I1213 14:59:12.585326 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.585333 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:12.585338 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:12.585396 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:12.613481 1302865 cri.go:89] found id: ""
	I1213 14:59:12.613494 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.613501 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:12.613506 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:12.613564 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:12.636592 1302865 cri.go:89] found id: ""
	I1213 14:59:12.636614 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.636621 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:12.636627 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:12.636694 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:12.660499 1302865 cri.go:89] found id: ""
	I1213 14:59:12.660513 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.660520 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:12.660524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:12.660591 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:12.684274 1302865 cri.go:89] found id: ""
	I1213 14:59:12.684297 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.684304 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:12.684309 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:12.684377 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:12.715959 1302865 cri.go:89] found id: ""
	I1213 14:59:12.715973 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.715980 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:12.715992 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:12.716003 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:12.779780 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:12.771561   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.772194   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.773845   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.774330   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.775929   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:12.771561   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.772194   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.773845   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.774330   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.775929   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:12.779790 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:12.779801 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:12.840858 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:12.840877 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:12.870238 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:12.870256 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:12.930596 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:12.930615 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:15.449328 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:15.460194 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:15.460255 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:15.484663 1302865 cri.go:89] found id: ""
	I1213 14:59:15.484677 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.484683 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:15.484689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:15.484799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:15.513604 1302865 cri.go:89] found id: ""
	I1213 14:59:15.513619 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.513626 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:15.513631 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:15.513692 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:15.543496 1302865 cri.go:89] found id: ""
	I1213 14:59:15.543510 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.543517 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:15.543524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:15.543596 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:15.576119 1302865 cri.go:89] found id: ""
	I1213 14:59:15.576133 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.576140 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:15.576145 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:15.576207 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:15.600649 1302865 cri.go:89] found id: ""
	I1213 14:59:15.600663 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.600670 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:15.600675 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:15.600743 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:15.624956 1302865 cri.go:89] found id: ""
	I1213 14:59:15.624970 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.624977 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:15.624984 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:15.625045 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:15.649687 1302865 cri.go:89] found id: ""
	I1213 14:59:15.649700 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.649707 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:15.649717 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:15.649728 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:15.711417 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:15.711439 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:15.739859 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:15.739876 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:15.796008 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:15.796027 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:15.813254 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:15.813271 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:15.889756 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:15.881341   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.881741   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883505   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883986   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.885525   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:15.881341   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.881741   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883505   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883986   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.885525   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:18.390805 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:18.401397 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:18.401458 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:18.426479 1302865 cri.go:89] found id: ""
	I1213 14:59:18.426493 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.426501 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:18.426507 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:18.426569 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:18.451763 1302865 cri.go:89] found id: ""
	I1213 14:59:18.451777 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.451784 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:18.451788 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:18.451846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:18.475994 1302865 cri.go:89] found id: ""
	I1213 14:59:18.476008 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.476015 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:18.476020 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:18.476080 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:18.500350 1302865 cri.go:89] found id: ""
	I1213 14:59:18.500363 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.500371 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:18.500376 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:18.500436 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:18.524126 1302865 cri.go:89] found id: ""
	I1213 14:59:18.524178 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.524186 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:18.524191 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:18.524251 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:18.552637 1302865 cri.go:89] found id: ""
	I1213 14:59:18.552650 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.552657 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:18.552668 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:18.552735 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:18.576409 1302865 cri.go:89] found id: ""
	I1213 14:59:18.576423 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.576430 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:18.576437 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:18.576448 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:18.632727 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:18.632750 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:18.649857 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:18.649874 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:18.717909 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:18.709647   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.710444   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712120   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712687   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.714255   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:18.709647   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.710444   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712120   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712687   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.714255   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:18.717920 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:18.717930 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:18.779709 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:18.779731 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:21.307289 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:21.317675 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:21.317738 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:21.357856 1302865 cri.go:89] found id: ""
	I1213 14:59:21.357870 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.357886 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:21.357892 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:21.357952 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:21.383442 1302865 cri.go:89] found id: ""
	I1213 14:59:21.383456 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.383478 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:21.383483 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:21.383550 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:21.410523 1302865 cri.go:89] found id: ""
	I1213 14:59:21.410537 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.410544 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:21.410549 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:21.410606 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:21.437275 1302865 cri.go:89] found id: ""
	I1213 14:59:21.437289 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.437296 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:21.437303 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:21.437361 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:21.460786 1302865 cri.go:89] found id: ""
	I1213 14:59:21.460800 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.460807 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:21.460813 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:21.460871 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:21.484394 1302865 cri.go:89] found id: ""
	I1213 14:59:21.484409 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.484416 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:21.484422 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:21.484481 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:21.513384 1302865 cri.go:89] found id: ""
	I1213 14:59:21.513398 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.513405 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:21.513413 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:21.513423 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:21.568892 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:21.568912 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:21.586837 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:21.586854 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:21.662678 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:21.654029   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.654719   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.656490   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.657142   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.658705   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:21.654029   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.654719   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.656490   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.657142   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.658705   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:21.662688 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:21.662699 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:21.736289 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:21.736318 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:24.267273 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:24.277337 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:24.277401 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:24.300799 1302865 cri.go:89] found id: ""
	I1213 14:59:24.300813 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.300820 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:24.300825 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:24.300883 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:24.329119 1302865 cri.go:89] found id: ""
	I1213 14:59:24.329133 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.329140 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:24.329145 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:24.329207 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:24.359906 1302865 cri.go:89] found id: ""
	I1213 14:59:24.359920 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.359927 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:24.359934 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:24.359993 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:24.388174 1302865 cri.go:89] found id: ""
	I1213 14:59:24.388188 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.388195 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:24.388201 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:24.388265 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:24.416221 1302865 cri.go:89] found id: ""
	I1213 14:59:24.416235 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.416242 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:24.416247 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:24.416306 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:24.441358 1302865 cri.go:89] found id: ""
	I1213 14:59:24.441373 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.441380 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:24.441385 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:24.441444 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:24.465868 1302865 cri.go:89] found id: ""
	I1213 14:59:24.465882 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.465889 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:24.465897 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:24.465907 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:24.522170 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:24.522189 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:24.539720 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:24.539741 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:24.605986 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:24.597621   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.598252   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.599831   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.600201   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.601630   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:24.597621   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.598252   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.599831   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.600201   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.601630   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:24.605996 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:24.606007 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:24.667358 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:24.667377 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:27.195225 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:27.205377 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:27.205438 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:27.229665 1302865 cri.go:89] found id: ""
	I1213 14:59:27.229679 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.229686 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:27.229692 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:27.229755 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:27.253927 1302865 cri.go:89] found id: ""
	I1213 14:59:27.253943 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.253950 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:27.253961 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:27.254022 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:27.277865 1302865 cri.go:89] found id: ""
	I1213 14:59:27.277879 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.277886 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:27.277891 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:27.277949 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:27.305956 1302865 cri.go:89] found id: ""
	I1213 14:59:27.305969 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.305977 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:27.305982 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:27.306041 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:27.330227 1302865 cri.go:89] found id: ""
	I1213 14:59:27.330241 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.330248 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:27.330253 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:27.330312 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:27.367738 1302865 cri.go:89] found id: ""
	I1213 14:59:27.367752 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.367759 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:27.367764 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:27.367823 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:27.400224 1302865 cri.go:89] found id: ""
	I1213 14:59:27.400239 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.400254 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:27.400262 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:27.400272 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:27.428506 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:27.428525 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:27.484755 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:27.484775 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:27.501783 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:27.501800 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:27.568006 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:27.559400   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.559958   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.561857   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.562433   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.564051   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:27.559400   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.559958   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.561857   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.562433   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.564051   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:27.568017 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:27.568029 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:30.130924 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:30.142124 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:30.142187 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:30.168272 1302865 cri.go:89] found id: ""
	I1213 14:59:30.168286 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.168301 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:30.168306 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:30.168379 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:30.198491 1302865 cri.go:89] found id: ""
	I1213 14:59:30.198507 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.198515 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:30.198520 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:30.198583 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:30.224307 1302865 cri.go:89] found id: ""
	I1213 14:59:30.224321 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.224329 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:30.224334 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:30.224398 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:30.252127 1302865 cri.go:89] found id: ""
	I1213 14:59:30.252142 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.252150 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:30.252155 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:30.252216 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:30.277686 1302865 cri.go:89] found id: ""
	I1213 14:59:30.277700 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.277707 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:30.277712 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:30.277773 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:30.302751 1302865 cri.go:89] found id: ""
	I1213 14:59:30.302766 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.302773 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:30.302779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:30.302864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:30.331699 1302865 cri.go:89] found id: ""
	I1213 14:59:30.331713 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.331720 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:30.331727 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:30.331741 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:30.384091 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:30.384107 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:30.448178 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:30.448197 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:30.465395 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:30.465414 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:30.525911 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:30.518056   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.518498   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.519701   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.520227   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.521907   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:30.518056   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.518498   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.519701   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.520227   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.521907   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:30.525921 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:30.525931 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:33.088366 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:33.098677 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:33.098747 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:33.123559 1302865 cri.go:89] found id: ""
	I1213 14:59:33.123574 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.123581 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:33.123587 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:33.123648 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:33.149199 1302865 cri.go:89] found id: ""
	I1213 14:59:33.149214 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.149221 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:33.149231 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:33.149294 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:33.174660 1302865 cri.go:89] found id: ""
	I1213 14:59:33.174674 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.174681 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:33.174686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:33.174747 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:33.199686 1302865 cri.go:89] found id: ""
	I1213 14:59:33.199701 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.199709 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:33.199714 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:33.199776 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:33.223975 1302865 cri.go:89] found id: ""
	I1213 14:59:33.223990 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.223997 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:33.224002 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:33.224062 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:33.248004 1302865 cri.go:89] found id: ""
	I1213 14:59:33.248019 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.248026 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:33.248032 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:33.248099 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:33.272806 1302865 cri.go:89] found id: ""
	I1213 14:59:33.272821 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.272829 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:33.272837 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:33.272847 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:33.300705 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:33.300722 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:33.363767 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:33.363786 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:33.382421 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:33.382440 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:33.450503 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:33.442115   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.442703   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444236   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444803   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.446325   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:33.442115   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.442703   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444236   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444803   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.446325   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:33.450514 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:33.450526 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:36.015724 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:36.026901 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:36.026965 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:36.053629 1302865 cri.go:89] found id: ""
	I1213 14:59:36.053645 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.053653 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:36.053658 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:36.053722 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:36.080154 1302865 cri.go:89] found id: ""
	I1213 14:59:36.080170 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.080177 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:36.080183 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:36.080247 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:36.105197 1302865 cri.go:89] found id: ""
	I1213 14:59:36.105212 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.105219 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:36.105224 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:36.105284 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:36.129426 1302865 cri.go:89] found id: ""
	I1213 14:59:36.129440 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.129453 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:36.129458 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:36.129516 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:36.157680 1302865 cri.go:89] found id: ""
	I1213 14:59:36.157695 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.157702 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:36.157707 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:36.157768 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:36.186306 1302865 cri.go:89] found id: ""
	I1213 14:59:36.186320 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.186327 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:36.186333 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:36.186404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:36.210490 1302865 cri.go:89] found id: ""
	I1213 14:59:36.210504 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.210511 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:36.210518 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:36.210528 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:36.265225 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:36.265244 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:36.282625 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:36.282641 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:36.356056 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:36.344168   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.345304   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.346780   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.347080   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.348284   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:36.344168   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.345304   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.346780   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.347080   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.348284   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:36.356066 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:36.356078 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:36.426572 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:36.426595 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:38.953386 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:38.964071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:38.964149 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:38.987398 1302865 cri.go:89] found id: ""
	I1213 14:59:38.987412 1302865 logs.go:282] 0 containers: []
	W1213 14:59:38.987420 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:38.987426 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:38.987501 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:39.014333 1302865 cri.go:89] found id: ""
	I1213 14:59:39.014348 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.014355 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:39.014360 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:39.014425 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:39.041685 1302865 cri.go:89] found id: ""
	I1213 14:59:39.041699 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.041706 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:39.041711 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:39.041773 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:39.065151 1302865 cri.go:89] found id: ""
	I1213 14:59:39.065165 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.065172 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:39.065177 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:39.065236 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:39.089601 1302865 cri.go:89] found id: ""
	I1213 14:59:39.089614 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.089621 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:39.089629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:39.089695 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:39.114392 1302865 cri.go:89] found id: ""
	I1213 14:59:39.114406 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.114413 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:39.114418 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:39.114479 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:39.139175 1302865 cri.go:89] found id: ""
	I1213 14:59:39.139188 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.139195 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:39.139204 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:39.139214 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:39.194900 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:39.194920 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:39.212516 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:39.212534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:39.278353 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:39.270327   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.270899   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272513   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272875   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.274310   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:39.270327   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.270899   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272513   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272875   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.274310   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:39.278363 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:39.278376 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:39.339218 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:39.339237 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:41.878578 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:41.888870 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:41.888930 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:41.916325 1302865 cri.go:89] found id: ""
	I1213 14:59:41.916339 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.916346 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:41.916352 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:41.916408 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:41.940631 1302865 cri.go:89] found id: ""
	I1213 14:59:41.940646 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.940653 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:41.940658 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:41.940721 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:41.964819 1302865 cri.go:89] found id: ""
	I1213 14:59:41.964835 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.964842 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:41.964847 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:41.964909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:41.992880 1302865 cri.go:89] found id: ""
	I1213 14:59:41.992895 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.992902 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:41.992907 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:41.992966 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:42.037181 1302865 cri.go:89] found id: ""
	I1213 14:59:42.037196 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.037203 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:42.037208 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:42.037272 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:42.066224 1302865 cri.go:89] found id: ""
	I1213 14:59:42.066240 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.066247 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:42.066253 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:42.066324 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:42.113241 1302865 cri.go:89] found id: ""
	I1213 14:59:42.113259 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.113267 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:42.113275 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:42.113288 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:42.174660 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:42.174686 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:42.197359 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:42.197391 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:42.287788 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:42.278004   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.278734   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.280708   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.281502   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.282312   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:42.278004   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.278734   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.280708   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.281502   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.282312   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:42.287799 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:42.287810 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:42.353033 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:42.353052 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:44.892059 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:44.902815 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:44.902875 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:44.927725 1302865 cri.go:89] found id: ""
	I1213 14:59:44.927740 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.927747 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:44.927752 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:44.927815 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:44.957287 1302865 cri.go:89] found id: ""
	I1213 14:59:44.957301 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.957308 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:44.957313 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:44.957371 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:44.982138 1302865 cri.go:89] found id: ""
	I1213 14:59:44.982153 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.982160 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:44.982166 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:44.982225 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:45.025671 1302865 cri.go:89] found id: ""
	I1213 14:59:45.025689 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.025697 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:45.025704 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:45.025777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:45.070096 1302865 cri.go:89] found id: ""
	I1213 14:59:45.070112 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.070121 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:45.070126 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:45.070203 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:45.113264 1302865 cri.go:89] found id: ""
	I1213 14:59:45.113281 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.113289 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:45.113302 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:45.113391 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:45.146027 1302865 cri.go:89] found id: ""
	I1213 14:59:45.146050 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.146058 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:45.146073 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:45.146084 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:45.242018 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:45.242086 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:45.278598 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:45.278619 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:45.377053 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:45.367099   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369078   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369934   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.371774   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.372065   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:45.367099   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369078   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369934   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.371774   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.372065   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:45.377063 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:45.377073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:45.449162 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:45.449183 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:47.980927 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:47.991934 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:47.991998 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:48.022075 1302865 cri.go:89] found id: ""
	I1213 14:59:48.022091 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.022098 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:48.022103 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:48.022169 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:48.052438 1302865 cri.go:89] found id: ""
	I1213 14:59:48.052454 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.052461 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:48.052466 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:48.052543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:48.077918 1302865 cri.go:89] found id: ""
	I1213 14:59:48.077932 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.077940 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:48.077945 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:48.078008 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:48.107677 1302865 cri.go:89] found id: ""
	I1213 14:59:48.107691 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.107698 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:48.107703 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:48.107803 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:48.134492 1302865 cri.go:89] found id: ""
	I1213 14:59:48.134506 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.134514 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:48.134523 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:48.134616 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:48.159260 1302865 cri.go:89] found id: ""
	I1213 14:59:48.159274 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.159281 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:48.159286 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:48.159368 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:48.184905 1302865 cri.go:89] found id: ""
	I1213 14:59:48.184920 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.184927 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:48.184935 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:48.184945 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:48.240512 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:48.240535 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:48.257663 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:48.257683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:48.323284 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:48.314914   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.315437   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317182   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317842   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.319544   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:48.314914   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.315437   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317182   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317842   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.319544   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:48.323295 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:48.323306 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:48.393384 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:48.393403 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:50.925922 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:50.936831 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:50.936895 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:50.963232 1302865 cri.go:89] found id: ""
	I1213 14:59:50.963246 1302865 logs.go:282] 0 containers: []
	W1213 14:59:50.963253 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:50.963258 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:50.963354 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:50.993552 1302865 cri.go:89] found id: ""
	I1213 14:59:50.993566 1302865 logs.go:282] 0 containers: []
	W1213 14:59:50.993572 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:50.993578 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:50.993639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:51.021945 1302865 cri.go:89] found id: ""
	I1213 14:59:51.021978 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.021986 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:51.021991 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:51.022051 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:51.049002 1302865 cri.go:89] found id: ""
	I1213 14:59:51.049017 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.049024 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:51.049029 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:51.049113 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:51.075979 1302865 cri.go:89] found id: ""
	I1213 14:59:51.075995 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.076003 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:51.076008 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:51.076071 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:51.101633 1302865 cri.go:89] found id: ""
	I1213 14:59:51.101648 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.101656 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:51.101661 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:51.101724 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:51.128983 1302865 cri.go:89] found id: ""
	I1213 14:59:51.128999 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.129007 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:51.129015 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:51.129025 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:51.185511 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:51.185538 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:51.203284 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:51.203306 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:51.265859 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:51.257020   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.257804   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.259431   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.260025   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.261672   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:51.257020   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.257804   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.259431   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.260025   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.261672   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:51.265869 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:51.265880 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:51.328096 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:51.328116 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:53.857136 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:53.867344 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:53.867405 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:53.890843 1302865 cri.go:89] found id: ""
	I1213 14:59:53.890857 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.890864 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:53.890869 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:53.890927 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:53.915236 1302865 cri.go:89] found id: ""
	I1213 14:59:53.915250 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.915258 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:53.915263 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:53.915341 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:53.939500 1302865 cri.go:89] found id: ""
	I1213 14:59:53.939515 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.939523 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:53.939528 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:53.939588 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:53.968671 1302865 cri.go:89] found id: ""
	I1213 14:59:53.968686 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.968693 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:53.968698 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:53.968766 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:53.992869 1302865 cri.go:89] found id: ""
	I1213 14:59:53.992883 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.992895 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:53.992900 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:53.992962 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:54.020494 1302865 cri.go:89] found id: ""
	I1213 14:59:54.020510 1302865 logs.go:282] 0 containers: []
	W1213 14:59:54.020518 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:54.020524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:54.020587 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:54.047224 1302865 cri.go:89] found id: ""
	I1213 14:59:54.047240 1302865 logs.go:282] 0 containers: []
	W1213 14:59:54.047247 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:54.047256 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:54.047268 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:54.064625 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:54.064643 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:54.131051 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:54.122613   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.123186   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.124913   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.125456   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.126931   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:54.122613   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.123186   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.124913   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.125456   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.126931   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:54.131061 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:54.131072 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:54.198481 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:54.198502 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:54.229657 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:54.229673 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:56.788389 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:56.798893 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:56.798978 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:56.825463 1302865 cri.go:89] found id: ""
	I1213 14:59:56.825479 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.825486 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:56.825491 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:56.825569 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:56.850902 1302865 cri.go:89] found id: ""
	I1213 14:59:56.850916 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.850923 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:56.850928 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:56.850997 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:56.875729 1302865 cri.go:89] found id: ""
	I1213 14:59:56.875743 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.875750 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:56.875755 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:56.875812 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:56.904598 1302865 cri.go:89] found id: ""
	I1213 14:59:56.904612 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.904619 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:56.904624 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:56.904684 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:56.929612 1302865 cri.go:89] found id: ""
	I1213 14:59:56.929626 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.929633 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:56.929639 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:56.929696 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:56.954323 1302865 cri.go:89] found id: ""
	I1213 14:59:56.954337 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.954345 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:56.954350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:56.954411 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:56.978916 1302865 cri.go:89] found id: ""
	I1213 14:59:56.978930 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.978937 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:56.978944 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:56.978955 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:56.996271 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:56.996290 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:57.067201 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:57.058255   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.058923   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.060482   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.061159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.062082   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:57.058255   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.058923   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.060482   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.061159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.062082   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:57.067214 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:57.067227 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:57.129467 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:57.129486 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:57.160756 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:57.160773 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:59.726541 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:59.737128 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:59.737192 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:59.762034 1302865 cri.go:89] found id: ""
	I1213 14:59:59.762050 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.762057 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:59.762063 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:59.762136 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:59.786710 1302865 cri.go:89] found id: ""
	I1213 14:59:59.786724 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.786731 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:59.786738 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:59.786799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:59.823635 1302865 cri.go:89] found id: ""
	I1213 14:59:59.823649 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.823656 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:59.823661 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:59.823721 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:59.853555 1302865 cri.go:89] found id: ""
	I1213 14:59:59.853568 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.853576 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:59.853580 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:59.853639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:59.878766 1302865 cri.go:89] found id: ""
	I1213 14:59:59.878781 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.878788 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:59.878793 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:59.878853 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:59.904985 1302865 cri.go:89] found id: ""
	I1213 14:59:59.904999 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.905006 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:59.905012 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:59.905084 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:59.929868 1302865 cri.go:89] found id: ""
	I1213 14:59:59.929882 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.929889 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:59.929896 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:59.929906 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:59.991222 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:59.991242 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:00:00.071719 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 15:00:00.071740 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:00:00.209914 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 15:00:00.209948 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:00:00.266871 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:00:00.266916 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:00:00.606023 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:00:00.575459   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.581155   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.582626   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.584864   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.585965   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 15:00:00.575459   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.581155   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.582626   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.584864   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.585965   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
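At this point the budget for restoring the existing control plane is exhausted (the next lines record roughly 4m04s spent in restartPrimaryControlPlane), so minikube falls back to wiping the node with kubeadm reset and re-running kubeadm init from /var/tmp/minikube/kubeadm.yaml. A quick way to confirm the apiserver really is unreachable before that reset, sketched under the assumptions that the commands are run on the node itself, that ss is available there, and that the health endpoints allow unauthenticated access (the kubeadm default):

    # inside the node: nothing is bound to the apiserver port, so both checks fail fast
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    curl -ks --max-time 5 https://localhost:8441/livez || echo "apiserver not reachable"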
	I1213 15:00:03.107691 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:00:03.118897 1302865 kubeadm.go:602] duration metric: took 4m4.796487812s to restartPrimaryControlPlane
	W1213 15:00:03.118966 1302865 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 15:00:03.119044 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 15:00:03.535783 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:00:03.550485 1302865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 15:00:03.558915 1302865 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:00:03.558988 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:00:03.567415 1302865 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:00:03.567426 1302865 kubeadm.go:158] found existing configuration files:
	
	I1213 15:00:03.567481 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 15:00:03.576037 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:00:03.576097 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:00:03.584074 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 15:00:03.592593 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:00:03.592651 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:00:03.601062 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 15:00:03.609623 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:00:03.609683 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:00:03.617551 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 15:00:03.625819 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:00:03.625879 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
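The block above is minikube's stale-kubeconfig check: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf it greps for the expected control-plane endpoint and deletes the file when the endpoint is absent. Here every grep exits with status 2 because kubeadm reset has already removed the files, so the rm calls are no-ops. The same check written as a small loop, as a sketch only; the paths and endpoint are the ones in the log above:

    # remove kubeconfigs that do not point at the expected control-plane endpoint
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done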
	I1213 15:00:03.634092 1302865 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:00:03.677773 1302865 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 15:00:03.677823 1302865 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:00:03.751455 1302865 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:00:03.751520 1302865 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:00:03.751555 1302865 kubeadm.go:319] OS: Linux
	I1213 15:00:03.751599 1302865 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:00:03.751646 1302865 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:00:03.751692 1302865 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:00:03.751738 1302865 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:00:03.751785 1302865 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:00:03.751832 1302865 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:00:03.751877 1302865 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:00:03.751923 1302865 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:00:03.751968 1302865 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:00:03.818698 1302865 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:00:03.818804 1302865 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:00:03.818894 1302865 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:00:03.825177 1302865 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:00:03.828382 1302865 out.go:252]   - Generating certificates and keys ...
	I1213 15:00:03.828484 1302865 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:00:03.828568 1302865 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:00:03.828657 1302865 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 15:00:03.828722 1302865 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 15:00:03.828813 1302865 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 15:00:03.828870 1302865 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 15:00:03.828941 1302865 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 15:00:03.829005 1302865 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 15:00:03.829084 1302865 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 15:00:03.829160 1302865 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 15:00:03.829199 1302865 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 15:00:03.829258 1302865 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:00:04.177571 1302865 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:00:04.342429 1302865 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:00:04.668058 1302865 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:00:04.760444 1302865 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:00:05.013305 1302865 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:00:05.014367 1302865 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:00:05.019071 1302865 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:00:05.022340 1302865 out.go:252]   - Booting up control plane ...
	I1213 15:00:05.022442 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:00:05.022520 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:00:05.022586 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:00:05.042894 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:00:05.043146 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:00:05.050754 1302865 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:00:05.051023 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:00:05.051065 1302865 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:00:05.191860 1302865 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:00:05.191979 1302865 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:04:05.190333 1302865 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000252344s
	I1213 15:04:05.190362 1302865 kubeadm.go:319] 
	I1213 15:04:05.190420 1302865 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:04:05.190453 1302865 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:04:05.190557 1302865 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:04:05.190562 1302865 kubeadm.go:319] 
	I1213 15:04:05.190665 1302865 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:04:05.190696 1302865 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:04:05.190726 1302865 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:04:05.190729 1302865 kubeadm.go:319] 
	I1213 15:04:05.195506 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:04:05.195924 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:04:05.196033 1302865 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:04:05.196267 1302865 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 15:04:05.196271 1302865 kubeadm.go:319] 
	I1213 15:04:05.196339 1302865 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 15:04:05.196471 1302865 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000252344s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
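The failure mode is the one kubeadm names explicitly: the kubelet never answered its local health endpoint within the 4-minute window, so no static control-plane pods were started. The troubleshooting commands suggested in the message can be run directly on the node; a short sketch combining them (the unit name and both endpoints are quoted from the output above):

    # on the node: is the kubelet unit running, and if not, why?
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    # the probe kubeadm was waiting on
    curl -s http://127.0.0.1:10248/healthz; echo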
	
	I1213 15:04:05.196557 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 15:04:05.613572 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:04:05.627532 1302865 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:04:05.627586 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:04:05.635470 1302865 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:04:05.635487 1302865 kubeadm.go:158] found existing configuration files:
	
	I1213 15:04:05.635549 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 15:04:05.643770 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:04:05.643832 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:04:05.651305 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 15:04:05.659066 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:04:05.659119 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:04:05.666497 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 15:04:05.674867 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:04:05.674922 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:04:05.682604 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 15:04:05.690488 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:04:05.690547 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 15:04:05.697863 1302865 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:04:05.737903 1302865 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 15:04:05.738332 1302865 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:04:05.824821 1302865 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:04:05.824881 1302865 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:04:05.824914 1302865 kubeadm.go:319] OS: Linux
	I1213 15:04:05.824955 1302865 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:04:05.825000 1302865 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:04:05.825043 1302865 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:04:05.825103 1302865 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:04:05.825147 1302865 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:04:05.825200 1302865 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:04:05.825250 1302865 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:04:05.825294 1302865 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:04:05.825336 1302865 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:04:05.892296 1302865 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:04:05.892418 1302865 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:04:05.892526 1302865 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:04:05.898143 1302865 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:04:05.903540 1302865 out.go:252]   - Generating certificates and keys ...
	I1213 15:04:05.903629 1302865 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:04:05.903698 1302865 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:04:05.903775 1302865 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 15:04:05.903837 1302865 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 15:04:05.903908 1302865 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 15:04:05.903958 1302865 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 15:04:05.904021 1302865 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 15:04:05.904084 1302865 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 15:04:05.904160 1302865 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 15:04:05.904234 1302865 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 15:04:05.904275 1302865 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 15:04:05.904330 1302865 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:04:05.992570 1302865 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:04:06.166280 1302865 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:04:06.244452 1302865 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:04:06.386969 1302865 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:04:06.630629 1302865 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:04:06.631865 1302865 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:04:06.635872 1302865 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:04:06.639278 1302865 out.go:252]   - Booting up control plane ...
	I1213 15:04:06.639389 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:04:06.639462 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:04:06.639523 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:04:06.659049 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:04:06.659158 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:04:06.666661 1302865 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:04:06.666977 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:04:06.667151 1302865 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:04:06.810085 1302865 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:04:06.810198 1302865 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:08:06.809904 1302865 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000225024s
	I1213 15:08:06.809924 1302865 kubeadm.go:319] 
	I1213 15:08:06.810412 1302865 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:08:06.810499 1302865 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:08:06.810921 1302865 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:08:06.810931 1302865 kubeadm.go:319] 
	I1213 15:08:06.811146 1302865 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:08:06.811211 1302865 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:08:06.811291 1302865 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:08:06.811302 1302865 kubeadm.go:319] 
	I1213 15:08:06.814720 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:08:06.816724 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:08:06.816881 1302865 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:08:06.817212 1302865 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 15:08:06.817216 1302865 kubeadm.go:319] 
	I1213 15:08:06.817309 1302865 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 15:08:06.817355 1302865 kubeadm.go:403] duration metric: took 12m8.532180676s to StartCluster
	I1213 15:08:06.817385 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:08:06.817448 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:08:06.841821 1302865 cri.go:89] found id: ""
	I1213 15:08:06.841835 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.841841 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 15:08:06.841847 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:08:06.841909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:08:06.865102 1302865 cri.go:89] found id: ""
	I1213 15:08:06.865122 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.865129 1302865 logs.go:284] No container was found matching "etcd"
	I1213 15:08:06.865134 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:08:06.865194 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:08:06.889354 1302865 cri.go:89] found id: ""
	I1213 15:08:06.889369 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.889376 1302865 logs.go:284] No container was found matching "coredns"
	I1213 15:08:06.889381 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:08:06.889444 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:08:06.916987 1302865 cri.go:89] found id: ""
	I1213 15:08:06.917001 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.917008 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 15:08:06.917014 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:08:06.917074 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:08:06.941966 1302865 cri.go:89] found id: ""
	I1213 15:08:06.941980 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.941987 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:08:06.941992 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:08:06.942053 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:08:06.967555 1302865 cri.go:89] found id: ""
	I1213 15:08:06.967570 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.967576 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 15:08:06.967582 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:08:06.967642 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:08:06.990643 1302865 cri.go:89] found id: ""
	I1213 15:08:06.990661 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.990669 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 15:08:06.990677 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 15:08:06.990688 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:08:07.046948 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 15:08:07.046967 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:08:07.064271 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:08:07.064292 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:08:07.156681 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:08:07.142614   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.149501   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.150219   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.151350   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.152858   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 15:08:07.142614   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.149501   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.150219   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.151350   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.152858   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:08:07.156693 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 15:08:07.156703 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:08:07.225180 1302865 logs.go:123] Gathering logs for container status ...
	I1213 15:08:07.225205 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 15:08:07.257292 1302865 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 15:08:07.257342 1302865 out.go:285] * 
	W1213 15:08:07.257449 1302865 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:08:07.257519 1302865 out.go:285] * 
	W1213 15:08:07.259853 1302865 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 15:08:07.265906 1302865 out.go:203] 
	W1213 15:08:07.268865 1302865 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:08:07.268911 1302865 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 15:08:07.268933 1302865 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 15:08:07.272012 1302865 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 15:08:16 functional-562018 containerd[9685]: time="2025-12-13T15:08:16.614640453Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.594699770Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\""
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.603547510Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.603653813Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.607908789Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\" returns successfully"
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.989472917Z" level=info msg="No images store for sha256:3fb21f6d7fe9fd863c3548cb9498b8e552e958f0a50edc71e300f38a249a8021"
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.991836514Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.999814739Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:18 functional-562018 containerd[9685]: time="2025-12-13T15:08:18.000343226Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.424371600Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.427299481Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.429590825Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.438723433Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\" returns successfully"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.738866011Z" level=info msg="No images store for sha256:3fb21f6d7fe9fd863c3548cb9498b8e552e958f0a50edc71e300f38a249a8021"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.741155321Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.748278873Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.748608153Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.747498767Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\""
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.750124437Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.752467907Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.765182475Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\" returns successfully"
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.628092462Z" level=info msg="No images store for sha256:bffe89cb060c176804db60dc616d4e1117e4c9cbe423e0274bf52a76645edb04"
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.630292191Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.637226743Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.637535149Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:10:03.309981   23157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:10:03.310616   23157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:10:03.312426   23157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:10:03.312931   23157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:10:03.314505   23157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 15:10:03 up  6:52,  0 user,  load average: 0.46, 0.40, 0.50
	Linux functional-562018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 15:10:00 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:00 functional-562018 kubelet[23041]: E1213 15:10:00.448389   23041 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:10:00 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:10:00 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:10:01 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 472.
	Dec 13 15:10:01 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:01 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:01 functional-562018 kubelet[23047]: E1213 15:10:01.137344   23047 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:10:01 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:10:01 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:10:01 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 473.
	Dec 13 15:10:01 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:01 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:01 functional-562018 kubelet[23052]: E1213 15:10:01.905771   23052 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:10:01 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:10:01 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:10:02 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 474.
	Dec 13 15:10:02 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:02 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:02 functional-562018 kubelet[23073]: E1213 15:10:02.657528   23073 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:10:02 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:10:02 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:10:03 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 475.
	Dec 13 15:10:03 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:10:03 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 2 (358.608141ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.39s)
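The kubelet journal above shows every restart exiting with "kubelet is configured to not run on a host using cgroup v1", and the kubeadm preflight warning names the 'FailCgroupV1' kubelet option as the switch that controls this behaviour. Below is a minimal sketch of the two workarounds the log itself points at (the --extra-config suggestion printed by minikube, and the kubelet setting named in the warning); the failCgroupV1 field name is assumed from the warning text, other start flags from the original invocation are elided, and nothing here has been verified against this run:

	# Workaround suggested in the minikube output: override the kubelet cgroup driver at start.
	out/minikube-linux-arm64 start -p functional-562018 --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Alternative taken from the kubeadm warning: explicitly re-enable cgroup v1 support in the
	# KubeletConfiguration (field name assumed from the 'FailCgroupV1' option in the warning message;
	# the SystemVerification preflight check is already ignored by the kubeadm init command shown above).
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false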

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1213 15:08:33.826950 1252934 retry.go:31] will retry after 2.288315455s: Temporary Error: Get "http://10.109.93.161": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1213 15:08:42.552423 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1213 15:08:46.116502 1252934 retry.go:31] will retry after 5.370514557s: Temporary Error: Get "http://10.109.93.161": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1213 15:09:01.487416 1252934 retry.go:31] will retry after 6.770002765s: Temporary Error: Get "http://10.109.93.161": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(previous message repeated 16 more times)
I1213 15:09:18.257697 1252934 retry.go:31] will retry after 14.642596967s: Temporary Error: Get "http://10.109.93.161": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(previous message repeated 24 more times)
I1213 15:09:42.902264 1252934 retry.go:31] will retry after 8.594919333s: Temporary Error: Get "http://10.109.93.161": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(previous message repeated 121 more times)
E1213 15:11:45.629574 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(the identical warning above was emitted 40 more times while the apiserver kept refusing connections)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 2 (355.225515ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
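The repeated pod-list warnings above come from a label-selector poll against the apiserver that never became reachable. A minimal sketch of that kind of wait loop, assuming client-go and a kubeconfig pointing at the functional-562018 apiserver; this is illustrative only, not the suite's actual helper:

// waitForStorageProvisioner polls for a Running pod matching the
// integration-test=storage-provisioner label, mirroring the behaviour
// behind the warnings above (sketch; names and intervals are assumptions).
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForStorageProvisioner(kubeconfig string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	// Poll every 3s until the labelled pod is Running or the timeout expires;
	// each failed List call corresponds to one "connection refused" warning.
	return wait.PollUntilContextTimeout(context.Background(), 3*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=storage-provisioner",
			})
			if err != nil {
				fmt.Println("WARNING:", err) // apiserver down: keep polling
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					return true, nil
				}
			}
			return false, nil
		})
}

With the apiserver on 192.168.49.2:8441 down, every List call fails and the loop exhausts the 4m0s budget, which is the "context deadline exceeded" reported above.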
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-562018
helpers_test.go:244: (dbg) docker inspect functional-562018:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	        "Created": "2025-12-13T14:41:15.451086653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1291703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T14:41:15.527927053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hosts",
	        "LogPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648-json.log",
	        "Name": "/functional-562018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-562018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-562018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	                "LowerDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-562018",
	                "Source": "/var/lib/docker/volumes/functional-562018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-562018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-562018",
	                "name.minikube.sigs.k8s.io": "functional-562018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b22297a29553cdd0dbc4eaa766abcdb1e67465ee18f1e2f5f9917dc8cf6d08",
	            "SandboxKey": "/var/run/docker/netns/f4b22297a295",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-562018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f3:95:ff:30:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd2e7aed753870546b4bccdeed8073ad5795c14334e906d06b964f72dc448c38",
	                    "EndpointID": "a0e947c1e40f773105c811b67b7d1d63f19d3a20060380bbde944bf9bfe39be5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-562018",
	                        "2cd1277ca783"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
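The inspect output shows the container still Running, with apiserver port 8441/tcp published on 127.0.0.1:33921, yet every request is refused. A quick connectivity probe sketch in Go (illustrative; the 33921 host port is specific to this run and changes between runs):

// Probe both apiserver endpoints visible in the inspect output above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:33921"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// Expected while the apiserver is down: "connection refused".
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}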
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018: exit status 2 (309.475136ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-562018 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ ssh            │ functional-562018 ssh -- ls -la /mount-9p                                                                                                           │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ ssh            │ functional-562018 ssh sudo umount -f /mount-9p                                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ mount          │ -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1452398466/001:/mount2 --alsologtostderr -v=1                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ mount          │ -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1452398466/001:/mount1 --alsologtostderr -v=1                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ mount          │ -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1452398466/001:/mount3 --alsologtostderr -v=1                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ ssh            │ functional-562018 ssh findmnt -T /mount1                                                                                                            │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ ssh            │ functional-562018 ssh findmnt -T /mount1                                                                                                            │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ ssh            │ functional-562018 ssh findmnt -T /mount2                                                                                                            │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ ssh            │ functional-562018 ssh findmnt -T /mount3                                                                                                            │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ mount          │ -p functional-562018 --kill=true                                                                                                                    │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ start          │ -p functional-562018 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ start          │ -p functional-562018 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ start          │ -p functional-562018 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-562018 --alsologtostderr -v=1                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ update-context │ functional-562018 update-context --alsologtostderr -v=2                                                                                             │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ update-context │ functional-562018 update-context --alsologtostderr -v=2                                                                                             │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ update-context │ functional-562018 update-context --alsologtostderr -v=2                                                                                             │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ image          │ functional-562018 image ls --format short --alsologtostderr                                                                                         │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ image          │ functional-562018 image ls --format yaml --alsologtostderr                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ ssh            │ functional-562018 ssh pgrep buildkitd                                                                                                               │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │                     │
	│ image          │ functional-562018 image build -t localhost/my-image:functional-562018 testdata/build --alsologtostderr                                              │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ image          │ functional-562018 image ls                                                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ image          │ functional-562018 image ls --format json --alsologtostderr                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	│ image          │ functional-562018 image ls --format table --alsologtostderr                                                                                         │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:10 UTC │ 13 Dec 25 15:10 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 15:10:14
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 15:10:14.924491 1321719 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:10:14.924614 1321719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:10:14.924624 1321719 out.go:374] Setting ErrFile to fd 2...
	I1213 15:10:14.924629 1321719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:10:14.925025 1321719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:10:14.925420 1321719 out.go:368] Setting JSON to false
	I1213 15:10:14.926253 1321719 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24764,"bootTime":1765613851,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 15:10:14.926325 1321719 start.go:143] virtualization:  
	I1213 15:10:14.929616 1321719 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 15:10:14.933349 1321719 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 15:10:14.933442 1321719 notify.go:221] Checking for updates...
	I1213 15:10:14.939184 1321719 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 15:10:14.942058 1321719 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 15:10:14.944885 1321719 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 15:10:14.947818 1321719 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 15:10:14.950726 1321719 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 15:10:14.954103 1321719 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 15:10:14.954711 1321719 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 15:10:14.977567 1321719 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 15:10:14.977713 1321719 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 15:10:15.066292 1321719 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 15:10:15.055562981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 15:10:15.066417 1321719 docker.go:319] overlay module found
	I1213 15:10:15.069497 1321719 out.go:179] * Using the docker driver based on existing profile
	I1213 15:10:15.072536 1321719 start.go:309] selected driver: docker
	I1213 15:10:15.072573 1321719 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 15:10:15.072699 1321719 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 15:10:15.076744 1321719 out.go:203] 
	W1213 15:10:15.079852 1321719 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I1213 15:10:15.082795 1321719 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 15:08:17 functional-562018 containerd[9685]: time="2025-12-13T15:08:17.999814739Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:18 functional-562018 containerd[9685]: time="2025-12-13T15:08:18.000343226Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.424371600Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.427299481Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.429590825Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.438723433Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\" returns successfully"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.738866011Z" level=info msg="No images store for sha256:3fb21f6d7fe9fd863c3548cb9498b8e552e958f0a50edc71e300f38a249a8021"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.741155321Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.748278873Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:19 functional-562018 containerd[9685]: time="2025-12-13T15:08:19.748608153Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.747498767Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\""
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.750124437Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.752467907Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 15:08:20 functional-562018 containerd[9685]: time="2025-12-13T15:08:20.765182475Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-562018\" returns successfully"
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.628092462Z" level=info msg="No images store for sha256:bffe89cb060c176804db60dc616d4e1117e4c9cbe423e0274bf52a76645edb04"
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.630292191Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.637226743Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:21 functional-562018 containerd[9685]: time="2025-12-13T15:08:21.637535149Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:10:21 functional-562018 containerd[9685]: time="2025-12-13T15:10:21.161624245Z" level=info msg="connecting to shim l9492p1c95et9z4ftyhnj8p8l" address="unix:///run/containerd/s/61892c9da53f4323a58343a8aae549b3951b5842d7539c8cb32d0b9beed383b1" namespace=k8s.io protocol=ttrpc version=3
	Dec 13 15:10:21 functional-562018 containerd[9685]: time="2025-12-13T15:10:21.242581304Z" level=info msg="shim disconnected" id=l9492p1c95et9z4ftyhnj8p8l namespace=k8s.io
	Dec 13 15:10:21 functional-562018 containerd[9685]: time="2025-12-13T15:10:21.242903011Z" level=info msg="cleaning up after shim disconnected" id=l9492p1c95et9z4ftyhnj8p8l namespace=k8s.io
	Dec 13 15:10:21 functional-562018 containerd[9685]: time="2025-12-13T15:10:21.243002561Z" level=info msg="cleaning up dead shim" id=l9492p1c95et9z4ftyhnj8p8l namespace=k8s.io
	Dec 13 15:10:21 functional-562018 containerd[9685]: time="2025-12-13T15:10:21.497473176Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-562018\""
	Dec 13 15:10:21 functional-562018 containerd[9685]: time="2025-12-13T15:10:21.509061381Z" level=info msg="ImageCreate event name:\"sha256:acdceb19f63104a58da256dad168d902af0a1e5017b8bd59dbaccc8f16472693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:10:21 functional-562018 containerd[9685]: time="2025-12-13T15:10:21.509437076Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:12:27.047975   25225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:12:27.048693   25225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:12:27.050421   25225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:12:27.051021   25225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:12:27.052713   25225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 15:12:27 up  6:54,  0 user,  load average: 0.27, 0.35, 0.47
	Linux functional-562018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 15:12:23 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:12:24 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 663.
	Dec 13 15:12:24 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:12:24 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:12:24 functional-562018 kubelet[25092]: E1213 15:12:24.377134   25092 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:12:24 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:12:24 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:12:25 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 664.
	Dec 13 15:12:25 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:12:25 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:12:25 functional-562018 kubelet[25098]: E1213 15:12:25.138210   25098 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:12:25 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:12:25 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:12:25 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 665.
	Dec 13 15:12:25 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:12:25 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:12:25 functional-562018 kubelet[25104]: E1213 15:12:25.906997   25104 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:12:25 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:12:25 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:12:26 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 666.
	Dec 13 15:12:26 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:12:26 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:12:26 functional-562018 kubelet[25139]: E1213 15:12:26.647235   25139 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:12:26 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:12:26 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 2 (320.927638ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.70s)
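Editor's note: the kubelet journal above shows the shared root cause for this group of failures: every systemd restart (counters 663 through 666) exits immediately with "kubelet is configured to not run on a host using cgroup v1", so the apiserver never comes up and localhost:8441 refuses connections. The docker info lines later in this report show CgroupDriver:cgroupfs on the Ubuntu 20.04 host, which is consistent with a legacy cgroup v1 hierarchy. Below is a minimal sketch, not minikube or test-suite code, of how one might confirm which cgroup hierarchy the host exposes; it assumes the conventional check that /sys/fs/cgroup/cgroup.controllers exists only on a cgroup v2 (unified) host.

// cgroup_check.go - illustrative only; the file name and approach are assumptions,
// not minikube code. Prints whether the host exposes the cgroup v2 unified
// hierarchy that the failing kubelet validation above requires.
package main

import (
	"fmt"
	"os"
)

func main() {
	// /sys/fs/cgroup/cgroup.controllers is only present on a cgroup v2 host.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified): kubelet's cgroup validation should pass")
	} else if os.IsNotExist(err) {
		fmt.Println("cgroup v1 (legacy/hybrid): matches the kubelet crash loop above")
	} else {
		fmt.Println("could not determine cgroup mode:", err)
	}
}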

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-562018 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-562018 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (77.624942ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-562018 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
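Editor's note: all five label assertions above fail the same way. With the connection to 192.168.49.2:8441 refused, kubectl renders an empty List ({"apiVersion":"v1","items":[]}), so (index .items 0) has nothing to index and the template errors before .metadata.labels is ever evaluated. A minimal, self-contained reproduction using Go's text/template is sketched below (it is not minikube test code, and kubectl's template engine differs slightly, but it fails for the same reason), together with a guarded variant that only indexes when at least one node is returned.

package main

import (
	"os"
	"text/template"
)

// Illustrative reproduction of the failure mode above (assumed, not minikube code):
// indexing item 0 of an empty List errors before .metadata.labels is ever reached.
func main() {
	data := map[string]any{"items": []any{}} // shape of the raw data kubectl printed: {"items":[]}

	bad := template.Must(template.New("bad").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	if err := bad.Execute(os.Stdout, data); err != nil {
		// Execution fails here: index cannot take element 0 of an empty slice.
		os.Stderr.WriteString("bad template: " + err.Error() + "\n")
	}

	// A guarded variant indexes only when the API returned at least one node.
	guarded := template.Must(template.New("guarded").Parse(
		`{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}`))
	_ = guarded.Execute(os.Stdout, data) // prints nothing for an empty list instead of erroring
}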
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-562018
helpers_test.go:244: (dbg) docker inspect functional-562018:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	        "Created": "2025-12-13T14:41:15.451086653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1291703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T14:41:15.527927053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hosts",
	        "LogPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648-json.log",
	        "Name": "/functional-562018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-562018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-562018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
	                "LowerDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-562018",
	                "Source": "/var/lib/docker/volumes/functional-562018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-562018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-562018",
	                "name.minikube.sigs.k8s.io": "functional-562018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4b22297a29553cdd0dbc4eaa766abcdb1e67465ee18f1e2f5f9917dc8cf6d08",
	            "SandboxKey": "/var/run/docker/netns/f4b22297a295",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33918"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33919"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33922"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33920"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33921"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-562018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f3:95:ff:30:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd2e7aed753870546b4bccdeed8073ad5795c14334e906d06b964f72dc448c38",
	                    "EndpointID": "a0e947c1e40f773105c811b67b7d1d63f19d3a20060380bbde944bf9bfe39be5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-562018",
	                        "2cd1277ca783"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
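Editor's note: the inspect output above confirms the container itself is Running and that the apiserver port 8441/tcp is published on the host at 127.0.0.1:33921, while kubectl talks to 192.168.49.2:8441 inside the minikube network. A quick probe of both endpoints, sketched below purely for illustration (it is not part of the test harness), separates "container reachable but apiserver down", which is what the Stopped apiserver status and the connection-refused errors above indicate, from a container that is unreachable altogether.

package main

import (
	"fmt"
	"net"
	"time"
)

// Illustrative probe (assumed, not test-suite code). The addresses come from the
// docker inspect output and the kubectl errors earlier in this section.
func main() {
	for _, addr := range []string{
		"127.0.0.1:33921",   // host-side mapping of the apiserver port (from docker inspect)
		"192.168.49.2:8441", // container IP and port used by kubectl in the failures above
	} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // expected: connection refused while kubelet crash-loops
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}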
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018: exit status 2 (417.18457ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-562018 logs -n 25: (1.228390704s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-562018 ssh sudo crictl images                                                                                                                     │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                           │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │                     │
	│ cache   │ functional-562018 cache reload                                                                                                                               │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ ssh     │ functional-562018 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │ 13 Dec 25 14:55 UTC │
	│ kubectl │ functional-562018 kubectl -- --context functional-562018 get pods                                                                                            │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │                     │
	│ start   │ -p functional-562018 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:55 UTC │                     │
	│ cp      │ functional-562018 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ config  │ functional-562018 config unset cpus                                                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ config  │ functional-562018 config get cpus                                                                                                                            │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ config  │ functional-562018 config set cpus 2                                                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ config  │ functional-562018 config get cpus                                                                                                                            │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ config  │ functional-562018 config unset cpus                                                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh -n functional-562018 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ config  │ functional-562018 config get cpus                                                                                                                            │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ license │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ cp      │ functional-562018 cp functional-562018:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp4290011930/001/cp-test.txt │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh sudo systemctl is-active docker                                                                                                        │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ ssh     │ functional-562018 ssh -n functional-562018 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh sudo systemctl is-active crio                                                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	│ cp      │ functional-562018 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ ssh     │ functional-562018 ssh -n functional-562018 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │ 13 Dec 25 15:08 UTC │
	│ image   │ functional-562018 image load --daemon kicbase/echo-server:functional-562018 --alsologtostderr                                                                │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 15:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:55:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:55:53.719613 1302865 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:55:53.719728 1302865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:55:53.719732 1302865 out.go:374] Setting ErrFile to fd 2...
	I1213 14:55:53.719735 1302865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:55:53.719985 1302865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:55:53.720335 1302865 out.go:368] Setting JSON to false
	I1213 14:55:53.721190 1302865 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23903,"bootTime":1765613851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:55:53.721260 1302865 start.go:143] virtualization:  
	I1213 14:55:53.724694 1302865 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:55:53.728380 1302865 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:55:53.728496 1302865 notify.go:221] Checking for updates...
	I1213 14:55:53.734124 1302865 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:55:53.736928 1302865 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:55:53.739728 1302865 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:55:53.742545 1302865 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 14:55:53.745302 1302865 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:55:53.748618 1302865 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:55:53.748719 1302865 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:55:53.782535 1302865 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:55:53.782649 1302865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:55:53.845662 1302865 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 14:55:53.829246857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:55:53.845758 1302865 docker.go:319] overlay module found
	I1213 14:55:53.849849 1302865 out.go:179] * Using the docker driver based on existing profile
	I1213 14:55:53.852762 1302865 start.go:309] selected driver: docker
	I1213 14:55:53.852774 1302865 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:53.852875 1302865 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:55:53.852984 1302865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:55:53.929886 1302865 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 14:55:53.921020705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:55:53.930294 1302865 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 14:55:53.930319 1302865 cni.go:84] Creating CNI manager for ""
	I1213 14:55:53.930367 1302865 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:55:53.930406 1302865 start.go:353] cluster config:
	{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:53.933662 1302865 out.go:179] * Starting "functional-562018" primary control-plane node in "functional-562018" cluster
	I1213 14:55:53.936743 1302865 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 14:55:53.939760 1302865 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 14:55:53.942676 1302865 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:55:53.942716 1302865 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 14:55:53.942732 1302865 cache.go:65] Caching tarball of preloaded images
	I1213 14:55:53.942759 1302865 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 14:55:53.942845 1302865 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 14:55:53.942855 1302865 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 14:55:53.942970 1302865 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
	I1213 14:55:53.962568 1302865 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 14:55:53.962579 1302865 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 14:55:53.962597 1302865 cache.go:243] Successfully downloaded all kic artifacts
	I1213 14:55:53.962628 1302865 start.go:360] acquireMachinesLock for functional-562018: {Name:mk6a7956e4fce5d8e0f4d6fe039ab67ad6cd688b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 14:55:53.962689 1302865 start.go:364] duration metric: took 45.029µs to acquireMachinesLock for "functional-562018"
	I1213 14:55:53.962707 1302865 start.go:96] Skipping create...Using existing machine configuration
	I1213 14:55:53.962711 1302865 fix.go:54] fixHost starting: 
	I1213 14:55:53.962972 1302865 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
	I1213 14:55:53.980087 1302865 fix.go:112] recreateIfNeeded on functional-562018: state=Running err=<nil>
	W1213 14:55:53.980106 1302865 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 14:55:53.983261 1302865 out.go:252] * Updating the running docker "functional-562018" container ...
	I1213 14:55:53.983285 1302865 machine.go:94] provisionDockerMachine start ...
	I1213 14:55:53.983388 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.000833 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.001170 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.001177 1302865 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 14:55:54.155013 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:55:54.155027 1302865 ubuntu.go:182] provisioning hostname "functional-562018"
	I1213 14:55:54.155091 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.172804 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.173100 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.173108 1302865 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-562018 && echo "functional-562018" | sudo tee /etc/hostname
	I1213 14:55:54.335232 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
	
	I1213 14:55:54.335302 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.353315 1302865 main.go:143] libmachine: Using SSH client type: native
	I1213 14:55:54.353625 1302865 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1213 14:55:54.353638 1302865 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-562018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-562018/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-562018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 14:55:54.503602 1302865 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 14:55:54.503618 1302865 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 14:55:54.503648 1302865 ubuntu.go:190] setting up certificates
	I1213 14:55:54.503664 1302865 provision.go:84] configureAuth start
	I1213 14:55:54.503732 1302865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:55:54.520737 1302865 provision.go:143] copyHostCerts
	I1213 14:55:54.520806 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 14:55:54.520813 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 14:55:54.520892 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 14:55:54.520992 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 14:55:54.520996 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 14:55:54.521022 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 14:55:54.521079 1302865 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 14:55:54.521082 1302865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 14:55:54.521105 1302865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 14:55:54.521157 1302865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.functional-562018 san=[127.0.0.1 192.168.49.2 functional-562018 localhost minikube]
	I1213 14:55:54.737947 1302865 provision.go:177] copyRemoteCerts
	I1213 14:55:54.738007 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 14:55:54.738047 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.756271 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:54.864730 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 14:55:54.885080 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 14:55:54.903456 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 14:55:54.921228 1302865 provision.go:87] duration metric: took 417.552003ms to configureAuth
	I1213 14:55:54.921245 1302865 ubuntu.go:206] setting minikube options for container-runtime
	I1213 14:55:54.921445 1302865 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 14:55:54.921451 1302865 machine.go:97] duration metric: took 938.161957ms to provisionDockerMachine
	I1213 14:55:54.921458 1302865 start.go:293] postStartSetup for "functional-562018" (driver="docker")
	I1213 14:55:54.921469 1302865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 14:55:54.921526 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 14:55:54.921569 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:54.939146 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.043619 1302865 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 14:55:55.047116 1302865 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 14:55:55.047136 1302865 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 14:55:55.047147 1302865 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 14:55:55.047201 1302865 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 14:55:55.047279 1302865 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 14:55:55.047377 1302865 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> hosts in /etc/test/nested/copy/1252934
	I1213 14:55:55.047422 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1252934
	I1213 14:55:55.055022 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:55:55.072651 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts --> /etc/test/nested/copy/1252934/hosts (40 bytes)
	I1213 14:55:55.090146 1302865 start.go:296] duration metric: took 168.672467ms for postStartSetup
	I1213 14:55:55.090222 1302865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 14:55:55.090277 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.110519 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.212743 1302865 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 14:55:55.217665 1302865 fix.go:56] duration metric: took 1.254946074s for fixHost
	I1213 14:55:55.217694 1302865 start.go:83] releasing machines lock for "functional-562018", held for 1.254985507s
	I1213 14:55:55.217771 1302865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
	I1213 14:55:55.234536 1302865 ssh_runner.go:195] Run: cat /version.json
	I1213 14:55:55.234580 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.234841 1302865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 14:55:55.234904 1302865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
	I1213 14:55:55.258034 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.263005 1302865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
	I1213 14:55:55.363489 1302865 ssh_runner.go:195] Run: systemctl --version
	I1213 14:55:55.466608 1302865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 14:55:55.470983 1302865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 14:55:55.471044 1302865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 14:55:55.478685 1302865 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 14:55:55.478700 1302865 start.go:496] detecting cgroup driver to use...
	I1213 14:55:55.478730 1302865 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 14:55:55.478776 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 14:55:55.494349 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 14:55:55.507276 1302865 docker.go:218] disabling cri-docker service (if available) ...
	I1213 14:55:55.507360 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 14:55:55.523374 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 14:55:55.537388 1302865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 14:55:55.656533 1302865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 14:55:55.769801 1302865 docker.go:234] disabling docker service ...
	I1213 14:55:55.769857 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 14:55:55.784548 1302865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 14:55:55.797129 1302865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 14:55:55.915684 1302865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 14:55:56.027646 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 14:55:56.050399 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 14:55:56.066005 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 14:55:56.076093 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 14:55:56.085556 1302865 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 14:55:56.085627 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 14:55:56.094545 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:55:56.104197 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 14:55:56.114269 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 14:55:56.123172 1302865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 14:55:56.132178 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 14:55:56.141074 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 14:55:56.150470 1302865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 14:55:56.160063 1302865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 14:55:56.167903 1302865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 14:55:56.175659 1302865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:55:56.295844 1302865 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 14:55:56.441580 1302865 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 14:55:56.441654 1302865 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 14:55:56.445551 1302865 start.go:564] Will wait 60s for crictl version
	I1213 14:55:56.445607 1302865 ssh_runner.go:195] Run: which crictl
	I1213 14:55:56.449128 1302865 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 14:55:56.473587 1302865 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 14:55:56.473654 1302865 ssh_runner.go:195] Run: containerd --version
	I1213 14:55:56.493885 1302865 ssh_runner.go:195] Run: containerd --version
	I1213 14:55:56.518032 1302865 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 14:55:56.521077 1302865 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 14:55:56.537369 1302865 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 14:55:56.544433 1302865 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 14:55:56.547248 1302865 kubeadm.go:884] updating cluster {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 14:55:56.547410 1302865 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 14:55:56.547500 1302865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:55:56.572443 1302865 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:55:56.572458 1302865 containerd.go:534] Images already preloaded, skipping extraction
	I1213 14:55:56.572525 1302865 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 14:55:56.603700 1302865 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 14:55:56.603712 1302865 cache_images.go:86] Images are preloaded, skipping loading
	I1213 14:55:56.603718 1302865 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 14:55:56.603824 1302865 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-562018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 14:55:56.603888 1302865 ssh_runner.go:195] Run: sudo crictl info
	I1213 14:55:56.640969 1302865 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 14:55:56.640988 1302865 cni.go:84] Creating CNI manager for ""
	I1213 14:55:56.640997 1302865 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:55:56.641011 1302865 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 14:55:56.641033 1302865 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-562018 NodeName:functional-562018 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 14:55:56.641163 1302865 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-562018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 14:55:56.641238 1302865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 14:55:56.649442 1302865 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 14:55:56.649507 1302865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 14:55:56.657006 1302865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 14:55:56.669728 1302865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 14:55:56.682334 1302865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1213 14:55:56.694926 1302865 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 14:55:56.698838 1302865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 14:55:56.837238 1302865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 14:55:57.584722 1302865 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018 for IP: 192.168.49.2
	I1213 14:55:57.584733 1302865 certs.go:195] generating shared ca certs ...
	I1213 14:55:57.584753 1302865 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:55:57.584897 1302865 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 14:55:57.584947 1302865 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 14:55:57.584954 1302865 certs.go:257] generating profile certs ...
	I1213 14:55:57.585039 1302865 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key
	I1213 14:55:57.585090 1302865 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee
	I1213 14:55:57.585124 1302865 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key
	I1213 14:55:57.585235 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 14:55:57.585272 1302865 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 14:55:57.585280 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 14:55:57.585307 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 14:55:57.585330 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 14:55:57.585354 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 14:55:57.585399 1302865 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 14:55:57.591362 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 14:55:57.616349 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 14:55:57.635438 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 14:55:57.655371 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 14:55:57.672503 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 14:55:57.689594 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 14:55:57.706530 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 14:55:57.723556 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 14:55:57.740287 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 14:55:57.757304 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 14:55:57.774649 1302865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 14:55:57.792687 1302865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 14:55:57.805822 1302865 ssh_runner.go:195] Run: openssl version
	I1213 14:55:57.812225 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.819503 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 14:55:57.826726 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.830446 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.830502 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 14:55:57.871253 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 14:55:57.878814 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.886029 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 14:55:57.893560 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.897283 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.897343 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 14:55:57.938225 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 14:55:57.946132 1302865 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.953318 1302865 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 14:55:57.960779 1302865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.964616 1302865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 14:55:57.964674 1302865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 14:55:58.013928 1302865 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 14:55:58.021993 1302865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 14:55:58.026144 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 14:55:58.067380 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 14:55:58.114887 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 14:55:58.156572 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 14:55:58.199117 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 14:55:58.241809 1302865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 14:55:58.285184 1302865 kubeadm.go:401] StartCluster: {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:55:58.285266 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 14:55:58.285327 1302865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:55:58.314259 1302865 cri.go:89] found id: ""
	I1213 14:55:58.314322 1302865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 14:55:58.322386 1302865 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 14:55:58.322396 1302865 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 14:55:58.322453 1302865 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 14:55:58.329880 1302865 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.330377 1302865 kubeconfig.go:125] found "functional-562018" server: "https://192.168.49.2:8441"
	I1213 14:55:58.331729 1302865 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 14:55:58.341644 1302865 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 14:41:23.876598830 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 14:55:56.689854034 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1213 14:55:58.341663 1302865 kubeadm.go:1161] stopping kube-system containers ...
	I1213 14:55:58.341678 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 14:55:58.341741 1302865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 14:55:58.374972 1302865 cri.go:89] found id: ""
	I1213 14:55:58.375050 1302865 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 14:55:58.396016 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 14:55:58.404525 1302865 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 14:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 13 14:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 13 14:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 14:45 /etc/kubernetes/scheduler.conf
	
	I1213 14:55:58.404584 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 14:55:58.412946 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 14:55:58.420580 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.420635 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 14:55:58.428221 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 14:55:58.435971 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.436028 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 14:55:58.443530 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 14:55:58.451393 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 14:55:58.451448 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 14:55:58.458854 1302865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 14:55:58.466605 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:58.520413 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:59.744405 1302865 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.223964216s)
	I1213 14:55:59.744467 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:55:59.946438 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:56:00.013725 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 14:56:00.113319 1302865 api_server.go:52] waiting for apiserver process to appear ...
	I1213 14:56:00.114955 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:00.613579 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:01.114177 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:01.613592 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:02.113571 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:02.613593 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:03.113840 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:03.613569 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:04.114249 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:04.613852 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:05.113537 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:05.613696 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:06.113540 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:06.614342 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:07.113785 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:07.613457 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:08.114283 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:08.613596 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:09.113597 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:09.614352 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:10.114532 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:10.613598 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:11.114365 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:11.614158 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:12.113539 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:12.613531 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:13.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:13.613463 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:14.114527 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:14.614435 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:15.113510 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:15.614373 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:16.114388 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:16.613507 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:17.113567 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:17.614369 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:18.113844 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:18.613714 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:19.114404 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:19.614169 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:20.114541 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:20.613650 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:21.113498 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:21.613589 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:22.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:22.613522 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:23.114240 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:23.614475 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:24.113893 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:24.614467 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:25.114531 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:25.613526 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:26.114346 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:26.614504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:27.113518 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:27.614286 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:28.114181 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:28.613958 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:29.113601 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:29.614343 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:30.114309 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:30.614109 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:31.114271 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:31.613510 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:32.114261 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:32.614199 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:33.114060 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:33.614237 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:34.114371 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:34.614467 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:35.114182 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:35.613614 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:36.113542 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:36.614402 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:37.114233 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:37.613522 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:38.113599 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:38.613584 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:39.114045 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:39.613569 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:40.113521 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:40.613504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:41.113503 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:41.614239 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:42.113697 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:42.614293 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:43.113591 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:43.614231 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:44.114413 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:44.614537 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:45.114187 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:45.613592 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:46.113667 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:46.613755 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:47.113597 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:47.614262 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:48.113463 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:48.613700 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:49.113578 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:49.614192 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:50.113501 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:50.613492 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:51.114160 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:51.613924 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:52.114491 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:52.613532 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:53.113608 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:53.613620 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:54.114432 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:54.614359 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:55.114461 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:55.614143 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:56.113587 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:56.614451 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:57.113619 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:57.613622 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:58.113547 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:58.614429 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:59.113617 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:56:59.613534 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:00.124126 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:00.124233 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:00.200982 1302865 cri.go:89] found id: ""
	I1213 14:57:00.201003 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.201011 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:00.201018 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:00.201100 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:00.237755 1302865 cri.go:89] found id: ""
	I1213 14:57:00.237770 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.237778 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:00.237783 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:00.237861 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:00.301679 1302865 cri.go:89] found id: ""
	I1213 14:57:00.301694 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.301702 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:00.301709 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:00.301778 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:00.347228 1302865 cri.go:89] found id: ""
	I1213 14:57:00.347243 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.347251 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:00.347256 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:00.347356 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:00.376454 1302865 cri.go:89] found id: ""
	I1213 14:57:00.376471 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.376479 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:00.376485 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:00.376555 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:00.408967 1302865 cri.go:89] found id: ""
	I1213 14:57:00.408982 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.408989 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:00.408995 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:00.409059 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:00.437494 1302865 cri.go:89] found id: ""
	I1213 14:57:00.437509 1302865 logs.go:282] 0 containers: []
	W1213 14:57:00.437516 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:00.437524 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:00.437534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:00.493840 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:00.493860 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:00.511767 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:00.511785 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:00.579231 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:00.570332   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.571045   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.572740   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.573345   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.575117   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:00.570332   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.571045   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.572740   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.573345   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:00.575117   10762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:00.579242 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:00.579253 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:00.641446 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:00.641467 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:03.171486 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:03.181873 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:03.181935 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:03.212211 1302865 cri.go:89] found id: ""
	I1213 14:57:03.212226 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.212232 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:03.212244 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:03.212304 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:03.237934 1302865 cri.go:89] found id: ""
	I1213 14:57:03.237949 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.237957 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:03.237962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:03.238034 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:03.263822 1302865 cri.go:89] found id: ""
	I1213 14:57:03.263836 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.263843 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:03.263848 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:03.263910 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:03.289876 1302865 cri.go:89] found id: ""
	I1213 14:57:03.289890 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.289898 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:03.289902 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:03.289965 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:03.317957 1302865 cri.go:89] found id: ""
	I1213 14:57:03.317972 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.317979 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:03.318000 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:03.318060 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:03.346780 1302865 cri.go:89] found id: ""
	I1213 14:57:03.346793 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.346800 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:03.346805 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:03.346864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:03.371472 1302865 cri.go:89] found id: ""
	I1213 14:57:03.371485 1302865 logs.go:282] 0 containers: []
	W1213 14:57:03.371493 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:03.371501 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:03.371512 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:03.399569 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:03.399588 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:03.454307 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:03.454327 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:03.472933 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:03.472951 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:03.538528 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:03.529465   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.530241   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532007   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532517   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.534246   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:03.529465   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.530241   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532007   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.532517   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:03.534246   10880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:03.538539 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:03.538550 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
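(The block above is one iteration of minikube's apiserver wait loop: it first looks for a kube-apiserver process, then asks the CRI runtime for each expected control-plane container — kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet — and, finding none, falls back to collecting kubelet, dmesg, describe-nodes and containerd output. A minimal sketch of the same probes, run by hand inside the node; every command is copied from the log above, only the quoting and shell comments are added:

  # is an apiserver process running at all?
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
  # does the CRI runtime know about an apiserver container, running or exited?
  sudo crictl ps -a --quiet --name=kube-apiserver
  # kubelet's recent logs usually say why the static pods are not coming up
  sudo journalctl -u kubelet -n 400
  # the describe-nodes step needs a reachable apiserver, which is exactly what is missing here
  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
)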
	I1213 14:57:06.101738 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:06.112716 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:06.112778 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:06.139740 1302865 cri.go:89] found id: ""
	I1213 14:57:06.139753 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.139759 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:06.139770 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:06.139831 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:06.169906 1302865 cri.go:89] found id: ""
	I1213 14:57:06.169920 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.169927 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:06.169932 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:06.169993 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:06.194468 1302865 cri.go:89] found id: ""
	I1213 14:57:06.194482 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.194492 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:06.194497 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:06.194556 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:06.219346 1302865 cri.go:89] found id: ""
	I1213 14:57:06.219360 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.219367 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:06.219372 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:06.219466 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:06.244844 1302865 cri.go:89] found id: ""
	I1213 14:57:06.244858 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.244865 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:06.244870 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:06.244928 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:06.269412 1302865 cri.go:89] found id: ""
	I1213 14:57:06.269425 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.269433 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:06.269438 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:06.269498 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:06.293947 1302865 cri.go:89] found id: ""
	I1213 14:57:06.293960 1302865 logs.go:282] 0 containers: []
	W1213 14:57:06.293967 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:06.293975 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:06.293991 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:06.320232 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:06.320249 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:06.375210 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:06.375229 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:06.392065 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:06.392081 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:06.457910 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:06.449858   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.450478   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452122   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452601   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.454099   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:06.449858   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.450478   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452122   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.452601   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:06.454099   10985 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:06.457920 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:06.457931 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:09.020376 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:09.030584 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:09.030644 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:09.057441 1302865 cri.go:89] found id: ""
	I1213 14:57:09.057455 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.057462 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:09.057467 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:09.057529 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:09.091252 1302865 cri.go:89] found id: ""
	I1213 14:57:09.091266 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.091273 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:09.091277 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:09.091357 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:09.133954 1302865 cri.go:89] found id: ""
	I1213 14:57:09.133969 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.133976 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:09.133981 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:09.134041 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:09.161351 1302865 cri.go:89] found id: ""
	I1213 14:57:09.161365 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.161372 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:09.161386 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:09.161449 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:09.186493 1302865 cri.go:89] found id: ""
	I1213 14:57:09.186507 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.186515 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:09.186519 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:09.186579 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:09.210752 1302865 cri.go:89] found id: ""
	I1213 14:57:09.210766 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.210774 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:09.210779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:09.210841 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:09.235216 1302865 cri.go:89] found id: ""
	I1213 14:57:09.235231 1302865 logs.go:282] 0 containers: []
	W1213 14:57:09.235238 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:09.235246 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:09.235256 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:09.290010 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:09.290030 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:09.307105 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:09.307122 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:09.373837 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:09.365574   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.366312   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368046   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368412   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.369910   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:09.365574   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.366312   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368046   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.368412   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:09.369910   11079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:09.373848 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:09.373862 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:09.435916 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:09.435937 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:11.968947 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:11.978917 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:11.978976 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:12.003367 1302865 cri.go:89] found id: ""
	I1213 14:57:12.003387 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.003395 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:12.003401 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:12.003472 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:12.030862 1302865 cri.go:89] found id: ""
	I1213 14:57:12.030876 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.030883 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:12.030889 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:12.030947 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:12.055991 1302865 cri.go:89] found id: ""
	I1213 14:57:12.056006 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.056014 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:12.056020 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:12.056078 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:12.088685 1302865 cri.go:89] found id: ""
	I1213 14:57:12.088699 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.088706 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:12.088711 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:12.088771 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:12.119175 1302865 cri.go:89] found id: ""
	I1213 14:57:12.119199 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.119206 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:12.119212 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:12.119276 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:12.148170 1302865 cri.go:89] found id: ""
	I1213 14:57:12.148192 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.148199 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:12.148204 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:12.148276 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:12.173907 1302865 cri.go:89] found id: ""
	I1213 14:57:12.173929 1302865 logs.go:282] 0 containers: []
	W1213 14:57:12.173936 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:12.173944 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:12.173955 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:12.230024 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:12.230044 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:12.249202 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:12.249219 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:12.317257 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:12.307463   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.308131   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.309930   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.310545   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.312207   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:12.307463   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.308131   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.309930   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.310545   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:12.312207   11180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:12.317267 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:12.317284 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:12.384433 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:12.384455 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:14.917091 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:14.927788 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:14.927850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:14.953190 1302865 cri.go:89] found id: ""
	I1213 14:57:14.953205 1302865 logs.go:282] 0 containers: []
	W1213 14:57:14.953212 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:14.953226 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:14.953289 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:14.978043 1302865 cri.go:89] found id: ""
	I1213 14:57:14.978068 1302865 logs.go:282] 0 containers: []
	W1213 14:57:14.978075 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:14.978081 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:14.978175 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:15.004731 1302865 cri.go:89] found id: ""
	I1213 14:57:15.004749 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.004756 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:15.004761 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:15.004846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:15.048669 1302865 cri.go:89] found id: ""
	I1213 14:57:15.048685 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.048693 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:15.048698 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:15.048777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:15.085505 1302865 cri.go:89] found id: ""
	I1213 14:57:15.085520 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.085528 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:15.085534 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:15.085607 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:15.124753 1302865 cri.go:89] found id: ""
	I1213 14:57:15.124776 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.124784 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:15.124790 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:15.124860 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:15.168668 1302865 cri.go:89] found id: ""
	I1213 14:57:15.168682 1302865 logs.go:282] 0 containers: []
	W1213 14:57:15.168690 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:15.168698 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:15.168720 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:15.236878 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:15.228546   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.229113   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.230853   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.231344   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.232993   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:15.228546   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.229113   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.230853   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.231344   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:15.232993   11279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:15.236889 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:15.236899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:15.299331 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:15.299361 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:15.331125 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:15.331142 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:15.391451 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:15.391478 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:17.910179 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:17.920514 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:17.920590 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:17.945066 1302865 cri.go:89] found id: ""
	I1213 14:57:17.945081 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.945088 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:17.945094 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:17.945152 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:17.972856 1302865 cri.go:89] found id: ""
	I1213 14:57:17.972870 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.972878 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:17.972882 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:17.972944 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:17.999205 1302865 cri.go:89] found id: ""
	I1213 14:57:17.999219 1302865 logs.go:282] 0 containers: []
	W1213 14:57:17.999226 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:17.999231 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:17.999288 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:18.034164 1302865 cri.go:89] found id: ""
	I1213 14:57:18.034178 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.034185 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:18.034190 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:18.034255 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:18.060346 1302865 cri.go:89] found id: ""
	I1213 14:57:18.060361 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.060368 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:18.060373 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:18.060438 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:18.089688 1302865 cri.go:89] found id: ""
	I1213 14:57:18.089702 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.089710 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:18.089718 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:18.089780 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:18.128859 1302865 cri.go:89] found id: ""
	I1213 14:57:18.128874 1302865 logs.go:282] 0 containers: []
	W1213 14:57:18.128881 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:18.128889 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:18.128899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:18.188820 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:18.188842 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:18.206229 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:18.206247 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:18.277989 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:18.269084   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.269641   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.271489   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.272150   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.273959   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:18.269084   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.269641   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.271489   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.272150   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:18.273959   11386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:18.277999 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:18.278009 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:18.339945 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:18.339965 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:20.869114 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:20.879800 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:20.879866 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:20.905760 1302865 cri.go:89] found id: ""
	I1213 14:57:20.905774 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.905781 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:20.905786 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:20.905849 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:20.931353 1302865 cri.go:89] found id: ""
	I1213 14:57:20.931367 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.931374 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:20.931379 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:20.931445 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:20.956682 1302865 cri.go:89] found id: ""
	I1213 14:57:20.956696 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.956704 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:20.956709 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:20.956769 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:20.980824 1302865 cri.go:89] found id: ""
	I1213 14:57:20.980838 1302865 logs.go:282] 0 containers: []
	W1213 14:57:20.980845 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:20.980850 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:20.980909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:21.008951 1302865 cri.go:89] found id: ""
	I1213 14:57:21.008974 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.008982 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:21.008987 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:21.009058 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:21.038190 1302865 cri.go:89] found id: ""
	I1213 14:57:21.038204 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.038211 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:21.038216 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:21.038277 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:21.063608 1302865 cri.go:89] found id: ""
	I1213 14:57:21.063622 1302865 logs.go:282] 0 containers: []
	W1213 14:57:21.063630 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:21.063638 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:21.063648 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:21.132089 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:21.132109 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:21.171889 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:21.171908 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:21.230786 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:21.230806 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:21.247733 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:21.247753 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:21.318785 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:21.309659   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.310476   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312082   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312759   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.314434   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:21.309659   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.310476   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312082   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.312759   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:21.314434   11504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
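(Every "failed describe nodes" stanza above has the same root cause: the node's kubeconfig points kubectl at https://localhost:8441, but no kube-apiserver container ever starts, so nothing is listening there and the connection is refused. A quick way to confirm that symptom from inside the node — assuming ss and curl are present in the node image, which the log itself does not show:

  # nothing should be listening on the apiserver port while the failure persists
  sudo ss -ltnp | grep 8441
  # the same refusal kubectl reports, without going through the kubeconfig
  curl -ksS https://localhost:8441/livez; echo
)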
	I1213 14:57:23.819828 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:23.830541 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:23.830604 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:23.853826 1302865 cri.go:89] found id: ""
	I1213 14:57:23.853840 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.853856 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:23.853862 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:23.853933 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:23.879146 1302865 cri.go:89] found id: ""
	I1213 14:57:23.879169 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.879177 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:23.879182 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:23.879253 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:23.904357 1302865 cri.go:89] found id: ""
	I1213 14:57:23.904371 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.904379 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:23.904384 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:23.904450 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:23.929036 1302865 cri.go:89] found id: ""
	I1213 14:57:23.929050 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.929058 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:23.929063 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:23.929124 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:23.954748 1302865 cri.go:89] found id: ""
	I1213 14:57:23.954762 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.954779 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:23.954785 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:23.954854 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:23.979661 1302865 cri.go:89] found id: ""
	I1213 14:57:23.979676 1302865 logs.go:282] 0 containers: []
	W1213 14:57:23.979683 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:23.979687 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:23.979750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:24.009902 1302865 cri.go:89] found id: ""
	I1213 14:57:24.009918 1302865 logs.go:282] 0 containers: []
	W1213 14:57:24.009925 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:24.009935 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:24.009946 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:24.079943 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:24.070877   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.071501   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073325   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073881   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.075497   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:24.070877   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.071501   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073325   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.073881   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:24.075497   11585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:24.079954 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:24.079966 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:24.144015 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:24.144037 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:24.174637 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:24.174654 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:24.235392 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:24.235413 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:26.753238 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:26.763339 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:26.763404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:26.788474 1302865 cri.go:89] found id: ""
	I1213 14:57:26.788487 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.788494 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:26.788499 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:26.788559 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:26.814440 1302865 cri.go:89] found id: ""
	I1213 14:57:26.814454 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.814461 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:26.814466 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:26.814524 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:26.841795 1302865 cri.go:89] found id: ""
	I1213 14:57:26.841809 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.841816 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:26.841821 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:26.841880 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:26.869399 1302865 cri.go:89] found id: ""
	I1213 14:57:26.869413 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.869420 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:26.869425 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:26.869482 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:26.892445 1302865 cri.go:89] found id: ""
	I1213 14:57:26.892459 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.892467 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:26.892472 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:26.892535 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:26.916537 1302865 cri.go:89] found id: ""
	I1213 14:57:26.916558 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.916565 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:26.916570 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:26.916639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:26.940628 1302865 cri.go:89] found id: ""
	I1213 14:57:26.940650 1302865 logs.go:282] 0 containers: []
	W1213 14:57:26.940658 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:26.940671 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:26.940681 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:26.969808 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:26.969827 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:27.025191 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:27.025211 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:27.042465 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:27.042482 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:27.122593 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:27.114680   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.115527   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117159   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117448   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.118857   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:27.114680   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.115527   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117159   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.117448   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:27.118857   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:27.122618 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:27.122628 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:29.693191 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:29.703585 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:29.703652 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:29.732578 1302865 cri.go:89] found id: ""
	I1213 14:57:29.732593 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.732614 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:29.732621 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:29.732686 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:29.757517 1302865 cri.go:89] found id: ""
	I1213 14:57:29.757531 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.757538 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:29.757543 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:29.757604 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:29.785456 1302865 cri.go:89] found id: ""
	I1213 14:57:29.785470 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.785476 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:29.785482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:29.785544 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:29.809997 1302865 cri.go:89] found id: ""
	I1213 14:57:29.810011 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.810018 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:29.810023 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:29.810085 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:29.834277 1302865 cri.go:89] found id: ""
	I1213 14:57:29.834292 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.834299 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:29.834304 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:29.834366 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:29.858653 1302865 cri.go:89] found id: ""
	I1213 14:57:29.858667 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.858675 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:29.858686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:29.858749 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:29.884435 1302865 cri.go:89] found id: ""
	I1213 14:57:29.884450 1302865 logs.go:282] 0 containers: []
	W1213 14:57:29.884456 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:29.884464 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:29.884477 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:29.911338 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:29.911356 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:29.966819 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:29.966838 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:29.985125 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:29.985141 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:30.070789 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:30.059761   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.060525   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063144   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063902   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.066109   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:30.059761   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.060525   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063144   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.063902   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:30.066109   11818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:30.070800 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:30.070811 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:32.643832 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:32.654329 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:32.654399 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:32.687375 1302865 cri.go:89] found id: ""
	I1213 14:57:32.687390 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.687398 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:32.687403 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:32.687465 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:32.712437 1302865 cri.go:89] found id: ""
	I1213 14:57:32.712452 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.712460 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:32.712465 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:32.712529 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:32.738220 1302865 cri.go:89] found id: ""
	I1213 14:57:32.738234 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.738241 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:32.738247 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:32.738310 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:32.763211 1302865 cri.go:89] found id: ""
	I1213 14:57:32.763226 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.763233 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:32.763238 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:32.763299 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:32.789049 1302865 cri.go:89] found id: ""
	I1213 14:57:32.789063 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.789071 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:32.789077 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:32.789141 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:32.815194 1302865 cri.go:89] found id: ""
	I1213 14:57:32.815208 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.815215 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:32.815221 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:32.815284 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:32.840629 1302865 cri.go:89] found id: ""
	I1213 14:57:32.840646 1302865 logs.go:282] 0 containers: []
	W1213 14:57:32.840653 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:32.840661 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:32.840672 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:32.868556 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:32.868574 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:32.923451 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:32.923472 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:32.940492 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:32.940508 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:33.014646 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:33.000281   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.001080   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.003301   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.004000   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.006154   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:33.000281   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.001080   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.003301   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.004000   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:33.006154   11923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:33.014656 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:33.014680 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:35.576582 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:35.586876 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:35.586939 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:35.612619 1302865 cri.go:89] found id: ""
	I1213 14:57:35.612634 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.612641 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:35.612646 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:35.612714 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:35.637275 1302865 cri.go:89] found id: ""
	I1213 14:57:35.637289 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.637296 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:35.637302 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:35.637363 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:35.661936 1302865 cri.go:89] found id: ""
	I1213 14:57:35.661950 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.661957 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:35.661962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:35.662035 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:35.691702 1302865 cri.go:89] found id: ""
	I1213 14:57:35.691716 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.691722 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:35.691727 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:35.691789 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:35.719594 1302865 cri.go:89] found id: ""
	I1213 14:57:35.719608 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.719614 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:35.719619 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:35.719685 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:35.747602 1302865 cri.go:89] found id: ""
	I1213 14:57:35.747617 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.747624 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:35.747629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:35.747690 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:35.772489 1302865 cri.go:89] found id: ""
	I1213 14:57:35.772503 1302865 logs.go:282] 0 containers: []
	W1213 14:57:35.772510 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:35.772517 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:35.772534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:35.801457 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:35.801474 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:35.859688 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:35.859708 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:35.877069 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:35.877087 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:35.942565 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:35.934502   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.935224   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.936888   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.937197   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.938690   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:35.934502   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.935224   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.936888   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.937197   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:35.938690   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:35.942576 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:35.942595 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:38.506862 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:38.517509 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:38.517575 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:38.542481 1302865 cri.go:89] found id: ""
	I1213 14:57:38.542496 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.542512 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:38.542517 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:38.542586 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:38.567177 1302865 cri.go:89] found id: ""
	I1213 14:57:38.567191 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.567198 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:38.567202 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:38.567264 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:38.591952 1302865 cri.go:89] found id: ""
	I1213 14:57:38.591967 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.591974 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:38.591979 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:38.592036 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:38.615589 1302865 cri.go:89] found id: ""
	I1213 14:57:38.615604 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.615619 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:38.615625 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:38.615697 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:38.641025 1302865 cri.go:89] found id: ""
	I1213 14:57:38.641039 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.641046 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:38.641051 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:38.641115 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:38.666245 1302865 cri.go:89] found id: ""
	I1213 14:57:38.666259 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.666276 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:38.666282 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:38.666355 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:38.691177 1302865 cri.go:89] found id: ""
	I1213 14:57:38.691191 1302865 logs.go:282] 0 containers: []
	W1213 14:57:38.691198 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:38.691206 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:38.691217 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:38.748984 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:38.749004 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:38.765774 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:38.765791 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:38.833656 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:38.825292   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.825956   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.827650   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.828246   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.829830   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:38.825292   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.825956   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.827650   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.828246   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:38.829830   12124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:38.833672 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:38.833683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:38.895503 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:38.895524 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:41.424760 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:41.435082 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:41.435154 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:41.460250 1302865 cri.go:89] found id: ""
	I1213 14:57:41.460265 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.460273 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:41.460278 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:41.460338 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:41.490003 1302865 cri.go:89] found id: ""
	I1213 14:57:41.490017 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.490024 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:41.490029 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:41.490094 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:41.515086 1302865 cri.go:89] found id: ""
	I1213 14:57:41.515100 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.515107 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:41.515112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:41.515173 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:41.540169 1302865 cri.go:89] found id: ""
	I1213 14:57:41.540183 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.540205 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:41.540211 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:41.540279 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:41.564345 1302865 cri.go:89] found id: ""
	I1213 14:57:41.564358 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.564365 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:41.564370 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:41.564429 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:41.589001 1302865 cri.go:89] found id: ""
	I1213 14:57:41.589015 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.589022 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:41.589027 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:41.589091 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:41.617434 1302865 cri.go:89] found id: ""
	I1213 14:57:41.617447 1302865 logs.go:282] 0 containers: []
	W1213 14:57:41.617455 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:41.617462 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:41.617471 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:41.683384 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:41.683411 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:41.711592 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:41.711611 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:41.769286 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:41.769305 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:41.786199 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:41.786219 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:41.854485 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:41.846163   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.846886   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848432   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848940   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.850514   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:41.846163   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.846886   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848432   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.848940   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:41.850514   12243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:44.355606 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:44.369969 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:44.370032 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:44.401460 1302865 cri.go:89] found id: ""
	I1213 14:57:44.401474 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.401481 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:44.401486 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:44.401548 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:44.431513 1302865 cri.go:89] found id: ""
	I1213 14:57:44.431527 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.431534 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:44.431539 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:44.431600 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:44.457242 1302865 cri.go:89] found id: ""
	I1213 14:57:44.457256 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.457263 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:44.457268 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:44.457329 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:44.482224 1302865 cri.go:89] found id: ""
	I1213 14:57:44.482238 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.482245 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:44.482250 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:44.482313 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:44.509856 1302865 cri.go:89] found id: ""
	I1213 14:57:44.509871 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.509878 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:44.509884 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:44.509950 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:44.533977 1302865 cri.go:89] found id: ""
	I1213 14:57:44.533992 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.533999 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:44.534005 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:44.534069 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:44.562015 1302865 cri.go:89] found id: ""
	I1213 14:57:44.562029 1302865 logs.go:282] 0 containers: []
	W1213 14:57:44.562036 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:44.562044 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:44.562055 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:44.629999 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:44.621407   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.622108   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.623865   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.624500   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.626099   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:44.621407   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.622108   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.623865   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.624500   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:44.626099   12327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:44.630009 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:44.630020 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:44.697021 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:44.697042 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:44.725319 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:44.725336 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:44.783033 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:44.783053 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:47.300684 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:47.311369 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:47.311431 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:47.343773 1302865 cri.go:89] found id: ""
	I1213 14:57:47.343787 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.343794 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:47.343800 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:47.343864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:47.373867 1302865 cri.go:89] found id: ""
	I1213 14:57:47.373881 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.373888 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:47.373893 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:47.373950 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:47.409488 1302865 cri.go:89] found id: ""
	I1213 14:57:47.409503 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.409510 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:47.409515 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:47.409576 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:47.436144 1302865 cri.go:89] found id: ""
	I1213 14:57:47.436160 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.436166 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:47.436172 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:47.436231 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:47.459642 1302865 cri.go:89] found id: ""
	I1213 14:57:47.459656 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.459664 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:47.459669 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:47.459728 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:47.488525 1302865 cri.go:89] found id: ""
	I1213 14:57:47.488539 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.488546 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:47.488589 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:47.488660 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:47.513277 1302865 cri.go:89] found id: ""
	I1213 14:57:47.513304 1302865 logs.go:282] 0 containers: []
	W1213 14:57:47.513312 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:47.513320 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:47.513333 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:47.569182 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:47.569201 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:47.586016 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:47.586033 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:47.657399 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:47.648527   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.649441   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651289   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651877   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.653400   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:47.648527   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.649441   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651289   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.651877   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:47.653400   12436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:47.657410 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:47.657421 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:47.719756 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:47.719776 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:50.250366 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:50.261360 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:50.261430 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:50.285575 1302865 cri.go:89] found id: ""
	I1213 14:57:50.285588 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.285595 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:50.285601 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:50.285657 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:50.313925 1302865 cri.go:89] found id: ""
	I1213 14:57:50.313939 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.313946 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:50.313951 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:50.314025 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:50.350634 1302865 cri.go:89] found id: ""
	I1213 14:57:50.350653 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.350660 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:50.350665 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:50.350725 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:50.377901 1302865 cri.go:89] found id: ""
	I1213 14:57:50.377915 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.377922 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:50.377927 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:50.377987 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:50.408528 1302865 cri.go:89] found id: ""
	I1213 14:57:50.408550 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.408557 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:50.408562 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:50.408637 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:50.434189 1302865 cri.go:89] found id: ""
	I1213 14:57:50.434203 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.434212 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:50.434217 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:50.434275 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:50.459353 1302865 cri.go:89] found id: ""
	I1213 14:57:50.459367 1302865 logs.go:282] 0 containers: []
	W1213 14:57:50.459373 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:50.459381 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:50.459391 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:50.515565 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:50.515585 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:50.532866 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:50.532883 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:50.599094 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:50.590849   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.591681   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593186   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593701   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.595193   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:50.590849   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.591681   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593186   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.593701   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:50.595193   12542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:50.599104 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:50.599115 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:50.663140 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:50.663159 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
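The same gather cycle repeats for as long as the apiserver stays down: each pass queries CRI for every control-plane component, finds nothing, then collects kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal sketch of the per-component check for running by hand on the node (e.g. over minikube ssh); the component names and crictl flags are the ones in the log above, and the loop itself is only an illustration:

# run on the minikube node; assumes crictl is on PATH, as in the log
for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
  echo "== $c =="
  sudo crictl ps -a --quiet --name="$c"   # empty output means the container was never created
done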
	I1213 14:57:53.200108 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:53.210621 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:53.210684 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:53.236457 1302865 cri.go:89] found id: ""
	I1213 14:57:53.236471 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.236478 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:53.236483 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:53.236545 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:53.269649 1302865 cri.go:89] found id: ""
	I1213 14:57:53.269664 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.269670 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:53.269677 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:53.269738 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:53.293759 1302865 cri.go:89] found id: ""
	I1213 14:57:53.293774 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.293781 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:53.293786 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:53.293846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:53.318675 1302865 cri.go:89] found id: ""
	I1213 14:57:53.318690 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.318696 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:53.318701 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:53.318765 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:53.353544 1302865 cri.go:89] found id: ""
	I1213 14:57:53.353558 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.353564 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:53.353569 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:53.353630 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:53.381535 1302865 cri.go:89] found id: ""
	I1213 14:57:53.381549 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.381565 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:53.381571 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:53.381641 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:53.408473 1302865 cri.go:89] found id: ""
	I1213 14:57:53.408487 1302865 logs.go:282] 0 containers: []
	W1213 14:57:53.408494 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:53.408502 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:53.408514 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:53.463646 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:53.463670 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:53.480500 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:53.480518 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:53.545969 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:53.538062   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.538490   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540123   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540597   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.542043   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:53.538062   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.538490   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540123   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.540597   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:53.542043   12648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:53.545979 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:53.545991 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:53.607729 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:53.607750 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:56.139407 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:56.150264 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:56.150335 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:56.175852 1302865 cri.go:89] found id: ""
	I1213 14:57:56.175866 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.175873 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:56.175878 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:56.175942 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:56.202887 1302865 cri.go:89] found id: ""
	I1213 14:57:56.202901 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.202908 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:56.202921 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:56.202981 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:56.229038 1302865 cri.go:89] found id: ""
	I1213 14:57:56.229053 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.229060 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:56.229065 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:56.229125 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:56.253081 1302865 cri.go:89] found id: ""
	I1213 14:57:56.253096 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.253103 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:56.253108 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:56.253172 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:56.277822 1302865 cri.go:89] found id: ""
	I1213 14:57:56.277836 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.277843 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:56.277849 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:56.277910 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:56.302419 1302865 cri.go:89] found id: ""
	I1213 14:57:56.302435 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.302442 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:56.302447 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:56.302508 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:56.327036 1302865 cri.go:89] found id: ""
	I1213 14:57:56.327050 1302865 logs.go:282] 0 containers: []
	W1213 14:57:56.327057 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:56.327066 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:56.327078 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:56.353968 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:56.353986 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:56.426915 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:56.418628   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.419173   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.420794   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.421363   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.422912   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:56.418628   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.419173   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.420794   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.421363   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:56.422912   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:56.426926 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:56.426943 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:56.488491 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:56.488513 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:56.516737 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:56.516753 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:57:59.077330 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:57:59.087745 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:57:59.087809 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:57:59.113689 1302865 cri.go:89] found id: ""
	I1213 14:57:59.113703 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.113710 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:57:59.113715 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:57:59.113774 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:57:59.138884 1302865 cri.go:89] found id: ""
	I1213 14:57:59.138898 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.138905 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:57:59.138911 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:57:59.138976 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:57:59.164226 1302865 cri.go:89] found id: ""
	I1213 14:57:59.164240 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.164246 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:57:59.164254 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:57:59.164312 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:57:59.189753 1302865 cri.go:89] found id: ""
	I1213 14:57:59.189767 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.189774 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:57:59.189779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:57:59.189840 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:57:59.219066 1302865 cri.go:89] found id: ""
	I1213 14:57:59.219080 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.219086 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:57:59.219092 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:57:59.219152 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:57:59.243456 1302865 cri.go:89] found id: ""
	I1213 14:57:59.243470 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.243477 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:57:59.243482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:57:59.243544 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:57:59.267676 1302865 cri.go:89] found id: ""
	I1213 14:57:59.267692 1302865 logs.go:282] 0 containers: []
	W1213 14:57:59.267699 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:57:59.267707 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:57:59.267719 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:57:59.284600 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:57:59.284617 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:57:59.356184 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:57:59.346354   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.347267   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349163   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349876   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.351596   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:57:59.346354   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.347267   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349163   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.349876   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:57:59.351596   12851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:57:59.356202 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:57:59.356215 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:57:59.427513 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:57:59.427535 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:57:59.459203 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:57:59.459220 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:02.016233 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:02.027182 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:02.027246 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:02.053453 1302865 cri.go:89] found id: ""
	I1213 14:58:02.053467 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.053475 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:02.053480 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:02.053543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:02.081288 1302865 cri.go:89] found id: ""
	I1213 14:58:02.081303 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.081310 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:02.081315 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:02.081377 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:02.106556 1302865 cri.go:89] found id: ""
	I1213 14:58:02.106572 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.106579 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:02.106585 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:02.106645 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:02.131201 1302865 cri.go:89] found id: ""
	I1213 14:58:02.131215 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.131221 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:02.131226 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:02.131286 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:02.156170 1302865 cri.go:89] found id: ""
	I1213 14:58:02.156194 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.156202 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:02.156207 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:02.156275 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:02.185059 1302865 cri.go:89] found id: ""
	I1213 14:58:02.185073 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.185080 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:02.185086 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:02.185153 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:02.209854 1302865 cri.go:89] found id: ""
	I1213 14:58:02.209870 1302865 logs.go:282] 0 containers: []
	W1213 14:58:02.209884 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:02.209893 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:02.209903 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:02.279934 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:02.270936   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.271416   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273300   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273902   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.275651   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:02.270936   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.271416   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273300   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.273902   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:02.275651   12956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:02.279958 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:02.279970 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:02.341869 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:02.341888 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:02.370761 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:02.370783 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:02.431851 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:02.431869 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:04.950137 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:04.960995 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:04.961059 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:04.986243 1302865 cri.go:89] found id: ""
	I1213 14:58:04.986257 1302865 logs.go:282] 0 containers: []
	W1213 14:58:04.986264 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:04.986269 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:04.986329 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:05.016170 1302865 cri.go:89] found id: ""
	I1213 14:58:05.016192 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.016200 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:05.016206 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:05.016270 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:05.042103 1302865 cri.go:89] found id: ""
	I1213 14:58:05.042117 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.042124 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:05.042129 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:05.042188 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:05.066050 1302865 cri.go:89] found id: ""
	I1213 14:58:05.066065 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.066071 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:05.066077 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:05.066141 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:05.091600 1302865 cri.go:89] found id: ""
	I1213 14:58:05.091615 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.091623 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:05.091634 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:05.091698 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:05.117406 1302865 cri.go:89] found id: ""
	I1213 14:58:05.117420 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.117427 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:05.117432 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:05.117491 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:05.143774 1302865 cri.go:89] found id: ""
	I1213 14:58:05.143788 1302865 logs.go:282] 0 containers: []
	W1213 14:58:05.143794 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:05.143802 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:05.143823 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:05.198717 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:05.198736 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:05.216110 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:05.216127 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:05.281771 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:05.273944   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.274471   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276069   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276389   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.277907   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:05.273944   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.274471   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276069   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.276389   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:05.277907   13069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:05.281792 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:05.281804 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:05.344051 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:05.344070 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:07.872032 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:07.883862 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:07.883925 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:07.908603 1302865 cri.go:89] found id: ""
	I1213 14:58:07.908616 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.908623 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:07.908628 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:07.908696 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:07.932609 1302865 cri.go:89] found id: ""
	I1213 14:58:07.932624 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.932631 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:07.932636 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:07.932729 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:07.957476 1302865 cri.go:89] found id: ""
	I1213 14:58:07.957490 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.957497 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:07.957502 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:07.957561 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:07.983994 1302865 cri.go:89] found id: ""
	I1213 14:58:07.984014 1302865 logs.go:282] 0 containers: []
	W1213 14:58:07.984022 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:07.984027 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:07.984090 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:08.016758 1302865 cri.go:89] found id: ""
	I1213 14:58:08.016772 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.016779 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:08.016784 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:08.016850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:08.048311 1302865 cri.go:89] found id: ""
	I1213 14:58:08.048326 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.048333 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:08.048338 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:08.048404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:08.074196 1302865 cri.go:89] found id: ""
	I1213 14:58:08.074211 1302865 logs.go:282] 0 containers: []
	W1213 14:58:08.074219 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:08.074226 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:08.074237 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:08.139046 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:08.139073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:08.167121 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:08.167141 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:08.222634 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:08.222664 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:08.240309 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:08.240325 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:08.310479 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:08.301605   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.302332   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304082   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304629   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.306246   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:08.301605   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.302332   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304082   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.304629   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:08.306246   13187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
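Every describe-nodes attempt fails identically: kubectl on the node cannot reach the apiserver at localhost:8441, which is consistent with the empty crictl listings above (no kube-apiserver container ever started). A minimal sketch of a direct probe from the node, assuming curl is available there; the kubectl binary path and kubeconfig are the ones shown in the log:

# run on the minikube node
curl -sk https://localhost:8441/readyz || echo "apiserver not answering on :8441"
sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig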
	I1213 14:58:10.810723 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:10.820844 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:10.820953 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:10.865862 1302865 cri.go:89] found id: ""
	I1213 14:58:10.865875 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.865882 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:10.865888 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:10.865959 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:10.896607 1302865 cri.go:89] found id: ""
	I1213 14:58:10.896621 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.896628 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:10.896634 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:10.896710 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:10.924657 1302865 cri.go:89] found id: ""
	I1213 14:58:10.924671 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.924678 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:10.924684 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:10.924748 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:10.949300 1302865 cri.go:89] found id: ""
	I1213 14:58:10.949314 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.949321 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:10.949326 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:10.949388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:10.973896 1302865 cri.go:89] found id: ""
	I1213 14:58:10.973910 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.973917 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:10.973922 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:10.973983 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:10.998200 1302865 cri.go:89] found id: ""
	I1213 14:58:10.998214 1302865 logs.go:282] 0 containers: []
	W1213 14:58:10.998231 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:10.998237 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:10.998295 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:11.024841 1302865 cri.go:89] found id: ""
	I1213 14:58:11.024856 1302865 logs.go:282] 0 containers: []
	W1213 14:58:11.024863 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:11.024871 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:11.024886 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:11.092350 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:11.083613   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.084237   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086051   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086531   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.088132   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:11.083613   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.084237   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086051   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.086531   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:11.088132   13270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:11.092361 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:11.092372 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:11.154591 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:11.154612 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:11.187883 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:11.187899 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:11.248594 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:11.248613 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:13.766160 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:13.776057 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:13.776115 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:13.800863 1302865 cri.go:89] found id: ""
	I1213 14:58:13.800877 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.800884 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:13.800889 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:13.800990 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:13.825283 1302865 cri.go:89] found id: ""
	I1213 14:58:13.825298 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.825305 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:13.825309 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:13.825368 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:13.857732 1302865 cri.go:89] found id: ""
	I1213 14:58:13.857746 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.857753 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:13.857758 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:13.857816 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:13.891546 1302865 cri.go:89] found id: ""
	I1213 14:58:13.891560 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.891566 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:13.891572 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:13.891629 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:13.918725 1302865 cri.go:89] found id: ""
	I1213 14:58:13.918738 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.918746 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:13.918750 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:13.918810 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:13.942434 1302865 cri.go:89] found id: ""
	I1213 14:58:13.942448 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.942455 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:13.942460 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:13.942521 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:13.966591 1302865 cri.go:89] found id: ""
	I1213 14:58:13.966606 1302865 logs.go:282] 0 containers: []
	W1213 14:58:13.966613 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:13.966621 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:13.966632 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:13.983200 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:13.983217 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:14.050601 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:14.041722   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.042310   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044028   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044598   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.046179   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:14.041722   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.042310   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044028   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.044598   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:14.046179   13375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:14.050610 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:14.050622 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:14.111742 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:14.111761 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:14.139171 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:14.139189 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:16.694504 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:16.704690 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:16.704753 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:16.730421 1302865 cri.go:89] found id: ""
	I1213 14:58:16.730436 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.730444 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:16.730449 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:16.730510 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:16.755642 1302865 cri.go:89] found id: ""
	I1213 14:58:16.755657 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.755676 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:16.755681 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:16.755741 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:16.780583 1302865 cri.go:89] found id: ""
	I1213 14:58:16.780597 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.780604 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:16.780610 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:16.780685 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:16.809520 1302865 cri.go:89] found id: ""
	I1213 14:58:16.809534 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.809542 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:16.809547 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:16.809606 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:16.845772 1302865 cri.go:89] found id: ""
	I1213 14:58:16.845787 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.845794 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:16.845799 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:16.845867 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:16.871303 1302865 cri.go:89] found id: ""
	I1213 14:58:16.871338 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.871345 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:16.871350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:16.871411 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:16.897846 1302865 cri.go:89] found id: ""
	I1213 14:58:16.897859 1302865 logs.go:282] 0 containers: []
	W1213 14:58:16.897866 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:16.897875 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:16.897885 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:16.959059 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:16.959079 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:16.996406 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:16.996421 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:17.052568 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:17.052589 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:17.069678 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:17.069696 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:17.133677 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:17.125422   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.125964   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.127591   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.128083   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.129662   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:17.125422   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.125964   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.127591   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.128083   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:17.129662   13497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:19.633920 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:19.644044 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:19.644109 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:19.668667 1302865 cri.go:89] found id: ""
	I1213 14:58:19.668681 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.668688 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:19.668693 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:19.668759 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:19.693045 1302865 cri.go:89] found id: ""
	I1213 14:58:19.693059 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.693066 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:19.693071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:19.693134 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:19.717622 1302865 cri.go:89] found id: ""
	I1213 14:58:19.717637 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.717643 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:19.717649 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:19.717708 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:19.742933 1302865 cri.go:89] found id: ""
	I1213 14:58:19.742948 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.742954 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:19.742962 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:19.743024 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:19.767055 1302865 cri.go:89] found id: ""
	I1213 14:58:19.767069 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.767076 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:19.767081 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:19.767139 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:19.793086 1302865 cri.go:89] found id: ""
	I1213 14:58:19.793100 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.793107 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:19.793112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:19.793172 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:19.816884 1302865 cri.go:89] found id: ""
	I1213 14:58:19.816898 1302865 logs.go:282] 0 containers: []
	W1213 14:58:19.816905 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:19.816912 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:19.816927 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:19.833746 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:19.833763 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:19.912181 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:19.904591   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.905016   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906518   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906824   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.908282   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:19.904591   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.905016   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906518   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.906824   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:19.908282   13584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:19.912191 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:19.912202 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:19.973611 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:19.973631 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:20.005249 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:20.005269 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:22.571015 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:22.581487 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:22.581553 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:22.606385 1302865 cri.go:89] found id: ""
	I1213 14:58:22.606399 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.606405 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:22.606411 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:22.606466 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:22.631290 1302865 cri.go:89] found id: ""
	I1213 14:58:22.631304 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.631330 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:22.631341 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:22.631402 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:22.656039 1302865 cri.go:89] found id: ""
	I1213 14:58:22.656053 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.656059 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:22.656064 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:22.656123 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:22.680255 1302865 cri.go:89] found id: ""
	I1213 14:58:22.680268 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.680275 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:22.680281 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:22.680339 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:22.705412 1302865 cri.go:89] found id: ""
	I1213 14:58:22.705426 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.705434 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:22.705439 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:22.705501 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:22.729869 1302865 cri.go:89] found id: ""
	I1213 14:58:22.729885 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.729891 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:22.729897 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:22.729961 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:22.757980 1302865 cri.go:89] found id: ""
	I1213 14:58:22.757994 1302865 logs.go:282] 0 containers: []
	W1213 14:58:22.758001 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:22.758009 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:22.758022 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:22.774416 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:22.774433 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:22.850017 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:22.837138   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.837894   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.843564   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.844183   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.845796   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:22.837138   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.837894   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.843564   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.844183   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:22.845796   13683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:22.850034 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:22.850045 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:22.916305 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:22.916327 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:22.946422 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:22.946438 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:25.504766 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:25.515062 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:25.515129 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:25.539801 1302865 cri.go:89] found id: ""
	I1213 14:58:25.539815 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.539822 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:25.539827 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:25.539888 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:25.564134 1302865 cri.go:89] found id: ""
	I1213 14:58:25.564148 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.564155 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:25.564159 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:25.564218 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:25.588150 1302865 cri.go:89] found id: ""
	I1213 14:58:25.588165 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.588173 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:25.588178 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:25.588239 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:25.613567 1302865 cri.go:89] found id: ""
	I1213 14:58:25.613581 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.613588 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:25.613593 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:25.613659 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:25.643274 1302865 cri.go:89] found id: ""
	I1213 14:58:25.643290 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.643297 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:25.643303 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:25.643388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:25.668136 1302865 cri.go:89] found id: ""
	I1213 14:58:25.668150 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.668157 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:25.668162 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:25.668223 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:25.693114 1302865 cri.go:89] found id: ""
	I1213 14:58:25.693128 1302865 logs.go:282] 0 containers: []
	W1213 14:58:25.693135 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:25.693143 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:25.693152 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:25.751087 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:25.751106 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:25.768578 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:25.768598 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:25.842306 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:25.826336   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.827040   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.829047   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.830610   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.833626   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:25.826336   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.827040   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.829047   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.830610   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:25.833626   13792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:25.842315 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:25.842325 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:25.934744 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:25.934771 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:28.468857 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:28.479478 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:28.479543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:28.509273 1302865 cri.go:89] found id: ""
	I1213 14:58:28.509286 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.509293 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:28.509299 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:28.509360 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:28.535574 1302865 cri.go:89] found id: ""
	I1213 14:58:28.535588 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.535595 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:28.535601 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:28.535660 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:28.561231 1302865 cri.go:89] found id: ""
	I1213 14:58:28.561244 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.561251 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:28.561256 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:28.561316 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:28.586867 1302865 cri.go:89] found id: ""
	I1213 14:58:28.586881 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.586897 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:28.586903 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:28.586971 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:28.613781 1302865 cri.go:89] found id: ""
	I1213 14:58:28.613795 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.613802 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:28.613807 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:28.613865 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:28.639226 1302865 cri.go:89] found id: ""
	I1213 14:58:28.639247 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.639255 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:28.639260 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:28.639351 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:28.664957 1302865 cri.go:89] found id: ""
	I1213 14:58:28.664971 1302865 logs.go:282] 0 containers: []
	W1213 14:58:28.664977 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:28.664985 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:28.664995 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:28.681545 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:28.681562 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:28.746274 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:28.738221   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.738895   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740429   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740922   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.742410   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:28.738221   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.738895   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740429   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.740922   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:28.742410   13897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:28.746286 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:28.746297 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:28.811866 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:28.811886 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:28.853916 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:28.853932 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:31.417796 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:31.427841 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:31.427906 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:31.454876 1302865 cri.go:89] found id: ""
	I1213 14:58:31.454890 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.454897 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:31.454903 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:31.454967 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:31.478745 1302865 cri.go:89] found id: ""
	I1213 14:58:31.478763 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.478770 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:31.478774 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:31.478834 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:31.504045 1302865 cri.go:89] found id: ""
	I1213 14:58:31.504059 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.504066 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:31.504071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:31.504132 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:31.536667 1302865 cri.go:89] found id: ""
	I1213 14:58:31.536687 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.536694 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:31.536699 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:31.536759 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:31.561651 1302865 cri.go:89] found id: ""
	I1213 14:58:31.561665 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.561672 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:31.561679 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:31.561740 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:31.590467 1302865 cri.go:89] found id: ""
	I1213 14:58:31.590487 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.590494 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:31.590499 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:31.590572 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:31.621443 1302865 cri.go:89] found id: ""
	I1213 14:58:31.621457 1302865 logs.go:282] 0 containers: []
	W1213 14:58:31.621467 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:31.621475 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:31.621485 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:31.689190 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:31.680703   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.681594   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.682366   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.683913   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.684339   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:31.680703   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.681594   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.682366   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.683913   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:31.684339   13999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:31.689199 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:31.689210 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:31.750918 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:31.750940 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:31.777989 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:31.778007 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:31.837415 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:31.837438 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:34.355220 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:34.365583 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:34.365646 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:34.390861 1302865 cri.go:89] found id: ""
	I1213 14:58:34.390875 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.390882 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:34.390887 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:34.390945 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:34.419452 1302865 cri.go:89] found id: ""
	I1213 14:58:34.419466 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.419473 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:34.419478 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:34.419540 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:34.444048 1302865 cri.go:89] found id: ""
	I1213 14:58:34.444062 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.444069 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:34.444073 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:34.444135 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:34.472603 1302865 cri.go:89] found id: ""
	I1213 14:58:34.472617 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.472623 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:34.472629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:34.472693 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:34.496330 1302865 cri.go:89] found id: ""
	I1213 14:58:34.496344 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.496351 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:34.496356 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:34.496415 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:34.521267 1302865 cri.go:89] found id: ""
	I1213 14:58:34.521281 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.521288 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:34.521294 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:34.521355 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:34.545219 1302865 cri.go:89] found id: ""
	I1213 14:58:34.545234 1302865 logs.go:282] 0 containers: []
	W1213 14:58:34.545241 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:34.545248 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:34.545263 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:34.611331 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:34.602304   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.603074   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.604885   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.605533   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.607098   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:34.602304   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.603074   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.604885   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.605533   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:34.607098   14103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:34.611342 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:34.611352 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:34.674005 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:34.674023 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:34.701768 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:34.701784 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:34.760313 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:34.760332 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:37.279813 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:37.289901 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:37.289961 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:37.314082 1302865 cri.go:89] found id: ""
	I1213 14:58:37.314097 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.314103 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:37.314115 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:37.314174 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:37.349456 1302865 cri.go:89] found id: ""
	I1213 14:58:37.349470 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.349477 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:37.349482 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:37.349540 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:37.376791 1302865 cri.go:89] found id: ""
	I1213 14:58:37.376805 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.376812 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:37.376817 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:37.376877 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:37.400702 1302865 cri.go:89] found id: ""
	I1213 14:58:37.400717 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.400724 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:37.400730 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:37.400792 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:37.424348 1302865 cri.go:89] found id: ""
	I1213 14:58:37.424363 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.424370 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:37.424375 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:37.424435 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:37.449182 1302865 cri.go:89] found id: ""
	I1213 14:58:37.449197 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.449204 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:37.449209 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:37.449270 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:37.476252 1302865 cri.go:89] found id: ""
	I1213 14:58:37.476266 1302865 logs.go:282] 0 containers: []
	W1213 14:58:37.476273 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:37.476280 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:37.476294 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:37.534602 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:37.534621 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:37.552019 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:37.552037 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:37.614270 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:37.605713   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.606478   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608056   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608699   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.610244   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:37.605713   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.606478   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608056   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.608699   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:37.610244   14208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:37.614281 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:37.614292 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:37.676894 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:37.676913 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:40.209558 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:40.220003 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:40.220065 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:40.246553 1302865 cri.go:89] found id: ""
	I1213 14:58:40.246567 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.246574 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:40.246579 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:40.246642 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:40.270663 1302865 cri.go:89] found id: ""
	I1213 14:58:40.270677 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.270684 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:40.270689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:40.270750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:40.296263 1302865 cri.go:89] found id: ""
	I1213 14:58:40.296278 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.296285 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:40.296292 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:40.296352 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:40.320181 1302865 cri.go:89] found id: ""
	I1213 14:58:40.320195 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.320204 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:40.320208 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:40.320268 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:40.345140 1302865 cri.go:89] found id: ""
	I1213 14:58:40.345155 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.345162 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:40.345167 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:40.345236 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:40.368989 1302865 cri.go:89] found id: ""
	I1213 14:58:40.369003 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.369010 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:40.369015 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:40.369075 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:40.393631 1302865 cri.go:89] found id: ""
	I1213 14:58:40.393646 1302865 logs.go:282] 0 containers: []
	W1213 14:58:40.393653 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:40.393661 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:40.393672 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:40.421318 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:40.421334 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:40.480359 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:40.480379 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:40.497525 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:40.497544 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:40.565603 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:40.557124   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.557721   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559295   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559804   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.561673   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:40.557124   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.557721   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559295   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.559804   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:40.561673   14322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:40.565614 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:40.565625 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:43.127433 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:43.141684 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:43.141744 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:43.166921 1302865 cri.go:89] found id: ""
	I1213 14:58:43.166935 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.166942 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:43.166947 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:43.167010 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:43.191796 1302865 cri.go:89] found id: ""
	I1213 14:58:43.191810 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.191817 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:43.191823 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:43.191883 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:43.220968 1302865 cri.go:89] found id: ""
	I1213 14:58:43.220982 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.220988 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:43.220993 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:43.221050 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:43.249138 1302865 cri.go:89] found id: ""
	I1213 14:58:43.249153 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.249160 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:43.249166 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:43.249226 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:43.273972 1302865 cri.go:89] found id: ""
	I1213 14:58:43.273986 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.273993 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:43.273998 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:43.274056 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:43.298424 1302865 cri.go:89] found id: ""
	I1213 14:58:43.298439 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.298446 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:43.298451 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:43.298523 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:43.326886 1302865 cri.go:89] found id: ""
	I1213 14:58:43.326900 1302865 logs.go:282] 0 containers: []
	W1213 14:58:43.326907 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:43.326915 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:43.326925 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:43.383183 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:43.383202 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:43.401545 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:43.401564 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:43.472321 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:43.463674   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.464321   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466101   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466707   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.468411   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:43.463674   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.464321   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466101   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.466707   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:43.468411   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:43.472331 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:43.472347 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:43.535483 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:43.535504 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:46.069443 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:46.079671 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:46.079735 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:46.112232 1302865 cri.go:89] found id: ""
	I1213 14:58:46.112246 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.112263 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:46.112268 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:46.112334 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:46.143946 1302865 cri.go:89] found id: ""
	I1213 14:58:46.143960 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.143968 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:46.143973 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:46.144034 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:46.172869 1302865 cri.go:89] found id: ""
	I1213 14:58:46.172893 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.172901 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:46.172906 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:46.172969 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:46.198118 1302865 cri.go:89] found id: ""
	I1213 14:58:46.198132 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.198139 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:46.198144 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:46.198210 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:46.226657 1302865 cri.go:89] found id: ""
	I1213 14:58:46.226672 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.226679 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:46.226689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:46.226750 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:46.250158 1302865 cri.go:89] found id: ""
	I1213 14:58:46.250183 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.250190 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:46.250199 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:46.250268 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:46.275259 1302865 cri.go:89] found id: ""
	I1213 14:58:46.275274 1302865 logs.go:282] 0 containers: []
	W1213 14:58:46.275281 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:46.275303 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:46.275335 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:46.349416 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:46.340779   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.341521   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343041   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343652   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.345289   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:46.340779   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.341521   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343041   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.343652   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:46.345289   14511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:46.349427 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:46.349440 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:46.412854 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:46.412874 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:46.443625 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:46.443641 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:46.501088 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:46.501108 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:49.018999 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:49.029334 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:49.029404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:49.054853 1302865 cri.go:89] found id: ""
	I1213 14:58:49.054867 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.054874 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:49.054879 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:49.054941 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:49.081166 1302865 cri.go:89] found id: ""
	I1213 14:58:49.081185 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.081193 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:49.081198 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:49.081261 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:49.109404 1302865 cri.go:89] found id: ""
	I1213 14:58:49.109418 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.109425 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:49.109430 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:49.109493 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:49.136643 1302865 cri.go:89] found id: ""
	I1213 14:58:49.136658 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.136665 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:49.136670 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:49.136741 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:49.165751 1302865 cri.go:89] found id: ""
	I1213 14:58:49.165765 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.165772 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:49.165777 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:49.165837 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:49.193225 1302865 cri.go:89] found id: ""
	I1213 14:58:49.193239 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.193246 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:49.193252 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:49.193314 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:49.221440 1302865 cri.go:89] found id: ""
	I1213 14:58:49.221455 1302865 logs.go:282] 0 containers: []
	W1213 14:58:49.221462 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:49.221470 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:49.221480 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:49.277216 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:49.277234 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:49.293907 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:49.293927 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:49.356075 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:49.348073   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.348458   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350225   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350571   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.352111   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:49.348073   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.348458   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350225   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.350571   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:49.352111   14621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:49.356085 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:49.356095 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:49.418015 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:49.418034 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:51.951013 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:51.961457 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:51.961522 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:51.988624 1302865 cri.go:89] found id: ""
	I1213 14:58:51.988638 1302865 logs.go:282] 0 containers: []
	W1213 14:58:51.988645 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:51.988650 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:51.988725 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:52.015499 1302865 cri.go:89] found id: ""
	I1213 14:58:52.015513 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.015520 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:52.015526 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:52.015589 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:52.041762 1302865 cri.go:89] found id: ""
	I1213 14:58:52.041777 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.041784 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:52.041789 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:52.041850 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:52.068323 1302865 cri.go:89] found id: ""
	I1213 14:58:52.068338 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.068345 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:52.068350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:52.068415 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:52.106065 1302865 cri.go:89] found id: ""
	I1213 14:58:52.106079 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.106086 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:52.106091 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:52.106160 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:52.140252 1302865 cri.go:89] found id: ""
	I1213 14:58:52.140272 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.140279 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:52.140284 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:52.140343 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:52.167100 1302865 cri.go:89] found id: ""
	I1213 14:58:52.167113 1302865 logs.go:282] 0 containers: []
	W1213 14:58:52.167120 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:52.167128 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:52.167138 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:52.226191 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:52.226210 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:52.243667 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:52.243683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:52.311033 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:52.302537   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.303096   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.304747   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.305085   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.306664   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:52.302537   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.303096   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.304747   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.305085   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:52.306664   14727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:52.311046 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:52.311057 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:52.372679 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:52.372703 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:54.903108 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:54.913373 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:54.913436 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:54.938658 1302865 cri.go:89] found id: ""
	I1213 14:58:54.938673 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.938680 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:54.938686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:54.938753 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:54.962838 1302865 cri.go:89] found id: ""
	I1213 14:58:54.962851 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.962866 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:54.962871 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:54.962942 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:54.988758 1302865 cri.go:89] found id: ""
	I1213 14:58:54.988773 1302865 logs.go:282] 0 containers: []
	W1213 14:58:54.988780 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:54.988785 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:54.988855 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:55.021177 1302865 cri.go:89] found id: ""
	I1213 14:58:55.021192 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.021200 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:55.021206 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:55.021272 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:55.049330 1302865 cri.go:89] found id: ""
	I1213 14:58:55.049344 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.049356 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:55.049361 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:55.049421 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:55.079835 1302865 cri.go:89] found id: ""
	I1213 14:58:55.079849 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.079856 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:55.079861 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:55.079920 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:55.107073 1302865 cri.go:89] found id: ""
	I1213 14:58:55.107087 1302865 logs.go:282] 0 containers: []
	W1213 14:58:55.107094 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:55.107102 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:55.107112 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:55.165853 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:55.165871 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:55.183109 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:55.183127 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:55.251642 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:55.242691   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.243211   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.244929   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.245470   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.247181   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:55.242691   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.243211   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.244929   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.245470   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:55.247181   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:55.251652 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:55.251664 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:55.317380 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:55.317399 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:58:57.847271 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:58:57.857537 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:58:57.857603 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:58:57.882391 1302865 cri.go:89] found id: ""
	I1213 14:58:57.882405 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.882412 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:58:57.882417 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:58:57.882490 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:58:57.905909 1302865 cri.go:89] found id: ""
	I1213 14:58:57.905923 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.905943 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:58:57.905948 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:58:57.906018 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:58:57.930237 1302865 cri.go:89] found id: ""
	I1213 14:58:57.930252 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.930259 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:58:57.930264 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:58:57.930337 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:58:57.958985 1302865 cri.go:89] found id: ""
	I1213 14:58:57.959014 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.959020 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:58:57.959031 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:58:57.959099 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:58:57.983693 1302865 cri.go:89] found id: ""
	I1213 14:58:57.983707 1302865 logs.go:282] 0 containers: []
	W1213 14:58:57.983714 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:58:57.983719 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:58:57.983779 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:58:58.012155 1302865 cri.go:89] found id: ""
	I1213 14:58:58.012170 1302865 logs.go:282] 0 containers: []
	W1213 14:58:58.012178 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:58:58.012183 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:58:58.012250 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:58:58.043700 1302865 cri.go:89] found id: ""
	I1213 14:58:58.043714 1302865 logs.go:282] 0 containers: []
	W1213 14:58:58.043722 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:58:58.043730 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:58:58.043742 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:58:58.105070 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:58:58.105098 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:58:58.123698 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:58:58.123717 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:58:58.194632 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:58:58.186276   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.187012   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.188759   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.189247   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.190768   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:58:58.186276   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.187012   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.188759   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.189247   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:58:58.190768   14937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:58:58.194642 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:58:58.194653 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:58:58.256210 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:58:58.256230 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:00.787680 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:00.798261 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:00.798326 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:00.826895 1302865 cri.go:89] found id: ""
	I1213 14:59:00.826908 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.826915 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:00.826921 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:00.826980 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:00.851410 1302865 cri.go:89] found id: ""
	I1213 14:59:00.851424 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.851431 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:00.851437 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:00.851510 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:00.876891 1302865 cri.go:89] found id: ""
	I1213 14:59:00.876906 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.876912 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:00.876917 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:00.876975 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:00.900564 1302865 cri.go:89] found id: ""
	I1213 14:59:00.900578 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.900585 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:00.900589 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:00.900647 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:00.925560 1302865 cri.go:89] found id: ""
	I1213 14:59:00.925574 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.925581 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:00.925586 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:00.925647 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:00.954298 1302865 cri.go:89] found id: ""
	I1213 14:59:00.954311 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.954319 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:00.954330 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:00.954388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:00.980684 1302865 cri.go:89] found id: ""
	I1213 14:59:00.980698 1302865 logs.go:282] 0 containers: []
	W1213 14:59:00.980704 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:00.980718 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:00.980731 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:01.048024 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:01.039594   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.040152   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.041748   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.042294   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.043863   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:01.039594   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.040152   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.041748   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.042294   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:01.043863   15034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:01.048033 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:01.048044 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:01.110723 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:01.110742 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:01.144966 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:01.144983 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:01.203272 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:01.203301 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
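The cycle above (and each one that follows) is minikube's wait loop for the apiserver: every pass probes for a kube-apiserver process, asks the CRI runtime for each control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal sketch of re-running the same probes by hand, assuming shell access to the node (for example via `minikube ssh`); the commands below are the ones quoted in the log lines:

	# probe for a running apiserver process (same check as the log's pgrep line)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# ask the CRI runtime for control-plane containers, running or exited
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	# when both come back empty, inspect the host-level services minikube falls back to
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
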
	I1213 14:59:03.722770 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:03.733112 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:03.733170 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:03.761042 1302865 cri.go:89] found id: ""
	I1213 14:59:03.761057 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.761064 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:03.761069 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:03.761130 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:03.789429 1302865 cri.go:89] found id: ""
	I1213 14:59:03.789443 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.789450 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:03.789455 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:03.789521 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:03.816916 1302865 cri.go:89] found id: ""
	I1213 14:59:03.816930 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.816937 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:03.816942 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:03.817001 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:03.844301 1302865 cri.go:89] found id: ""
	I1213 14:59:03.844317 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.844324 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:03.844329 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:03.844388 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:03.873060 1302865 cri.go:89] found id: ""
	I1213 14:59:03.873075 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.873082 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:03.873087 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:03.873147 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:03.910513 1302865 cri.go:89] found id: ""
	I1213 14:59:03.910527 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.910534 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:03.910539 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:03.910601 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:03.938039 1302865 cri.go:89] found id: ""
	I1213 14:59:03.938053 1302865 logs.go:282] 0 containers: []
	W1213 14:59:03.938060 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:03.938067 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:03.938077 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:03.993458 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:03.993478 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:04.011140 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:04.011157 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:04.078339 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:04.069502   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.070272   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072094   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072604   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.074176   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:04.069502   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.070272   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072094   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.072604   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:04.074176   15144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:04.078350 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:04.078361 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:04.142915 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:04.142934 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:06.673444 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:06.683643 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:06.683703 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:06.708707 1302865 cri.go:89] found id: ""
	I1213 14:59:06.708727 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.708734 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:06.708739 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:06.708799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:06.734465 1302865 cri.go:89] found id: ""
	I1213 14:59:06.734479 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.734486 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:06.734495 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:06.734584 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:06.759590 1302865 cri.go:89] found id: ""
	I1213 14:59:06.759603 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.759610 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:06.759615 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:06.759674 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:06.785693 1302865 cri.go:89] found id: ""
	I1213 14:59:06.785706 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.785713 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:06.785720 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:06.785777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:06.810125 1302865 cri.go:89] found id: ""
	I1213 14:59:06.810139 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.810146 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:06.810151 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:06.810215 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:06.835783 1302865 cri.go:89] found id: ""
	I1213 14:59:06.835797 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.835804 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:06.835809 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:06.835869 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:06.860909 1302865 cri.go:89] found id: ""
	I1213 14:59:06.860922 1302865 logs.go:282] 0 containers: []
	W1213 14:59:06.860929 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:06.860936 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:06.860946 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:06.916027 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:06.916047 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:06.933118 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:06.933135 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:06.997759 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:06.989536   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.990242   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.991821   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.992353   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.993901   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:06.989536   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.990242   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.991821   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.992353   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:06.993901   15244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:06.997769 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:06.997779 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:07.059939 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:07.059961 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:09.591076 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:09.601913 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:09.601975 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:09.626204 1302865 cri.go:89] found id: ""
	I1213 14:59:09.626218 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.626225 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:09.626230 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:09.626289 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:09.653443 1302865 cri.go:89] found id: ""
	I1213 14:59:09.653457 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.653463 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:09.653469 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:09.653531 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:09.678836 1302865 cri.go:89] found id: ""
	I1213 14:59:09.678851 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.678858 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:09.678865 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:09.678924 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:09.704492 1302865 cri.go:89] found id: ""
	I1213 14:59:09.704506 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.704514 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:09.704519 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:09.704581 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:09.733333 1302865 cri.go:89] found id: ""
	I1213 14:59:09.733355 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.733363 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:09.733368 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:09.733431 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:09.758847 1302865 cri.go:89] found id: ""
	I1213 14:59:09.758861 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.758869 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:09.758874 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:09.758946 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:09.785932 1302865 cri.go:89] found id: ""
	I1213 14:59:09.785946 1302865 logs.go:282] 0 containers: []
	W1213 14:59:09.785953 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:09.785962 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:09.785973 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:09.842054 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:09.842073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:09.859249 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:09.859273 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:09.924527 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:09.916673   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.917242   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.918722   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.919219   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.920662   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:09.916673   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.917242   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.918722   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.919219   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:09.920662   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:09.924536 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:09.924546 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:09.987531 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:09.987550 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:12.517373 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:12.529230 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:12.529292 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:12.558354 1302865 cri.go:89] found id: ""
	I1213 14:59:12.558368 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.558375 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:12.558380 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:12.558439 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:12.585312 1302865 cri.go:89] found id: ""
	I1213 14:59:12.585326 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.585333 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:12.585338 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:12.585396 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:12.613481 1302865 cri.go:89] found id: ""
	I1213 14:59:12.613494 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.613501 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:12.613506 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:12.613564 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:12.636592 1302865 cri.go:89] found id: ""
	I1213 14:59:12.636614 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.636621 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:12.636627 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:12.636694 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:12.660499 1302865 cri.go:89] found id: ""
	I1213 14:59:12.660513 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.660520 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:12.660524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:12.660591 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:12.684274 1302865 cri.go:89] found id: ""
	I1213 14:59:12.684297 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.684304 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:12.684309 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:12.684377 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:12.715959 1302865 cri.go:89] found id: ""
	I1213 14:59:12.715973 1302865 logs.go:282] 0 containers: []
	W1213 14:59:12.715980 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:12.715992 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:12.716003 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:12.779780 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:12.771561   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.772194   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.773845   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.774330   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.775929   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:12.771561   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.772194   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.773845   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.774330   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:12.775929   15442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:12.779790 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:12.779801 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:12.840858 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:12.840877 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:12.870238 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:12.870256 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:12.930596 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:12.930615 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:15.449328 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:15.460194 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:15.460255 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:15.484663 1302865 cri.go:89] found id: ""
	I1213 14:59:15.484677 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.484683 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:15.484689 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:15.484799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:15.513604 1302865 cri.go:89] found id: ""
	I1213 14:59:15.513619 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.513626 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:15.513631 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:15.513692 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:15.543496 1302865 cri.go:89] found id: ""
	I1213 14:59:15.543510 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.543517 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:15.543524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:15.543596 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:15.576119 1302865 cri.go:89] found id: ""
	I1213 14:59:15.576133 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.576140 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:15.576145 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:15.576207 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:15.600649 1302865 cri.go:89] found id: ""
	I1213 14:59:15.600663 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.600670 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:15.600675 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:15.600743 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:15.624956 1302865 cri.go:89] found id: ""
	I1213 14:59:15.624970 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.624977 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:15.624984 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:15.625045 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:15.649687 1302865 cri.go:89] found id: ""
	I1213 14:59:15.649700 1302865 logs.go:282] 0 containers: []
	W1213 14:59:15.649707 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:15.649717 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:15.649728 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:15.711417 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:15.711439 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:15.739859 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:15.739876 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:15.796008 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:15.796027 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:15.813254 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:15.813271 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:15.889756 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:15.881341   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.881741   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883505   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883986   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.885525   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:15.881341   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.881741   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883505   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.883986   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:15.885525   15566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:18.390805 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:18.401397 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:18.401458 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:18.426479 1302865 cri.go:89] found id: ""
	I1213 14:59:18.426493 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.426501 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:18.426507 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:18.426569 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:18.451763 1302865 cri.go:89] found id: ""
	I1213 14:59:18.451777 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.451784 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:18.451788 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:18.451846 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:18.475994 1302865 cri.go:89] found id: ""
	I1213 14:59:18.476008 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.476015 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:18.476020 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:18.476080 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:18.500350 1302865 cri.go:89] found id: ""
	I1213 14:59:18.500363 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.500371 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:18.500376 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:18.500436 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:18.524126 1302865 cri.go:89] found id: ""
	I1213 14:59:18.524178 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.524186 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:18.524191 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:18.524251 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:18.552637 1302865 cri.go:89] found id: ""
	I1213 14:59:18.552650 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.552657 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:18.552668 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:18.552735 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:18.576409 1302865 cri.go:89] found id: ""
	I1213 14:59:18.576423 1302865 logs.go:282] 0 containers: []
	W1213 14:59:18.576430 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:18.576437 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:18.576448 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:18.632727 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:18.632750 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:18.649857 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:18.649874 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:18.717909 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:18.709647   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.710444   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712120   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712687   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.714255   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:18.709647   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.710444   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712120   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.712687   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:18.714255   15657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:18.717920 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:18.717930 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:18.779709 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:18.779731 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:21.307289 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:21.317675 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:21.317738 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:21.357856 1302865 cri.go:89] found id: ""
	I1213 14:59:21.357870 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.357886 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:21.357892 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:21.357952 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:21.383442 1302865 cri.go:89] found id: ""
	I1213 14:59:21.383456 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.383478 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:21.383483 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:21.383550 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:21.410523 1302865 cri.go:89] found id: ""
	I1213 14:59:21.410537 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.410544 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:21.410549 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:21.410606 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:21.437275 1302865 cri.go:89] found id: ""
	I1213 14:59:21.437289 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.437296 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:21.437303 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:21.437361 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:21.460786 1302865 cri.go:89] found id: ""
	I1213 14:59:21.460800 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.460807 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:21.460813 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:21.460871 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:21.484394 1302865 cri.go:89] found id: ""
	I1213 14:59:21.484409 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.484416 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:21.484422 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:21.484481 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:21.513384 1302865 cri.go:89] found id: ""
	I1213 14:59:21.513398 1302865 logs.go:282] 0 containers: []
	W1213 14:59:21.513405 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:21.513413 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:21.513423 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:21.568892 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:21.568912 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:21.586837 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:21.586854 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:21.662678 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:21.654029   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.654719   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.656490   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.657142   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.658705   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:21.654029   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.654719   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.656490   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.657142   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:21.658705   15763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:21.662688 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:21.662699 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:21.736289 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:21.736318 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:24.267273 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:24.277337 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:24.277401 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:24.300799 1302865 cri.go:89] found id: ""
	I1213 14:59:24.300813 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.300820 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:24.300825 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:24.300883 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:24.329119 1302865 cri.go:89] found id: ""
	I1213 14:59:24.329133 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.329140 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:24.329145 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:24.329207 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:24.359906 1302865 cri.go:89] found id: ""
	I1213 14:59:24.359920 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.359927 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:24.359934 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:24.359993 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:24.388174 1302865 cri.go:89] found id: ""
	I1213 14:59:24.388188 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.388195 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:24.388201 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:24.388265 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:24.416221 1302865 cri.go:89] found id: ""
	I1213 14:59:24.416235 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.416242 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:24.416247 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:24.416306 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:24.441358 1302865 cri.go:89] found id: ""
	I1213 14:59:24.441373 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.441380 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:24.441385 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:24.441444 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:24.465868 1302865 cri.go:89] found id: ""
	I1213 14:59:24.465882 1302865 logs.go:282] 0 containers: []
	W1213 14:59:24.465889 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:24.465897 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:24.465907 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:24.522170 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:24.522189 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:24.539720 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:24.539741 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:24.605986 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:24.597621   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.598252   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.599831   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.600201   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.601630   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:24.597621   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.598252   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.599831   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.600201   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:24.601630   15867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:24.605996 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:24.606007 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:24.667358 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:24.667377 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:27.195225 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:27.205377 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:27.205438 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:27.229665 1302865 cri.go:89] found id: ""
	I1213 14:59:27.229679 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.229686 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:27.229692 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:27.229755 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:27.253927 1302865 cri.go:89] found id: ""
	I1213 14:59:27.253943 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.253950 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:27.253961 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:27.254022 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:27.277865 1302865 cri.go:89] found id: ""
	I1213 14:59:27.277879 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.277886 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:27.277891 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:27.277949 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:27.305956 1302865 cri.go:89] found id: ""
	I1213 14:59:27.305969 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.305977 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:27.305982 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:27.306041 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:27.330227 1302865 cri.go:89] found id: ""
	I1213 14:59:27.330241 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.330248 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:27.330253 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:27.330312 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:27.367738 1302865 cri.go:89] found id: ""
	I1213 14:59:27.367752 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.367759 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:27.367764 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:27.367823 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:27.400224 1302865 cri.go:89] found id: ""
	I1213 14:59:27.400239 1302865 logs.go:282] 0 containers: []
	W1213 14:59:27.400254 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:27.400262 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:27.400272 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:27.428506 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:27.428525 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:27.484755 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:27.484775 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:27.501783 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:27.501800 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:27.568006 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:27.559400   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.559958   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.561857   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.562433   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.564051   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:27.559400   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.559958   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.561857   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.562433   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:27.564051   15984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:27.568017 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:27.568029 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
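The cycle above repeats roughly every three seconds: minikube probes for a running kube-apiserver process, lists CRI containers for each control-plane component, finds none, and falls back to gathering only kubelet, dmesg, containerd and container-status logs. A minimal sketch of reproducing that probe by hand, assuming shell access to the node of the profile under test (the profile name below is taken from the test invocation and is otherwise an assumption; adjust as needed):

    # open a shell on the minikube node for this profile
    minikube -p functional-562018 ssh
    # the same probes the log shows: is an apiserver process running,
    # and does the CRI runtime know about an apiserver container?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    sudo crictl ps -a --quiet --name=kube-apiserver
    # an empty result from both, as in every cycle above, means the
    # apiserver was never started (or exited before registering)

Both probe commands appear verbatim in the log; only the interactive ssh wrapper is added for manual use.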
	I1213 14:59:30.130924 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:30.142124 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:30.142187 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:30.168272 1302865 cri.go:89] found id: ""
	I1213 14:59:30.168286 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.168301 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:30.168306 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:30.168379 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:30.198491 1302865 cri.go:89] found id: ""
	I1213 14:59:30.198507 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.198515 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:30.198520 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:30.198583 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:30.224307 1302865 cri.go:89] found id: ""
	I1213 14:59:30.224321 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.224329 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:30.224334 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:30.224398 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:30.252127 1302865 cri.go:89] found id: ""
	I1213 14:59:30.252142 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.252150 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:30.252155 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:30.252216 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:30.277686 1302865 cri.go:89] found id: ""
	I1213 14:59:30.277700 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.277707 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:30.277712 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:30.277773 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:30.302751 1302865 cri.go:89] found id: ""
	I1213 14:59:30.302766 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.302773 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:30.302779 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:30.302864 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:30.331699 1302865 cri.go:89] found id: ""
	I1213 14:59:30.331713 1302865 logs.go:282] 0 containers: []
	W1213 14:59:30.331720 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:30.331727 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:30.331741 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:30.384091 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:30.384107 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:30.448178 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:30.448197 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:30.465395 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:30.465414 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:30.525911 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:30.518056   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.518498   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.519701   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.520227   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.521907   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:30.518056   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.518498   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.519701   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.520227   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:30.521907   16090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:30.525921 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:30.525931 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:33.088366 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:33.098677 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:33.098747 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:33.123559 1302865 cri.go:89] found id: ""
	I1213 14:59:33.123574 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.123581 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:33.123587 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:33.123648 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:33.149199 1302865 cri.go:89] found id: ""
	I1213 14:59:33.149214 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.149221 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:33.149231 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:33.149294 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:33.174660 1302865 cri.go:89] found id: ""
	I1213 14:59:33.174674 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.174681 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:33.174686 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:33.174747 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:33.199686 1302865 cri.go:89] found id: ""
	I1213 14:59:33.199701 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.199709 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:33.199714 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:33.199776 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:33.223975 1302865 cri.go:89] found id: ""
	I1213 14:59:33.223990 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.223997 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:33.224002 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:33.224062 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:33.248004 1302865 cri.go:89] found id: ""
	I1213 14:59:33.248019 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.248026 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:33.248032 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:33.248099 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:33.272806 1302865 cri.go:89] found id: ""
	I1213 14:59:33.272821 1302865 logs.go:282] 0 containers: []
	W1213 14:59:33.272829 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:33.272837 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:33.272847 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:33.300705 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:33.300722 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:33.363767 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:33.363786 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:33.382421 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:33.382440 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:33.450503 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:33.442115   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.442703   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444236   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444803   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.446325   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:33.442115   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.442703   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444236   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.444803   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:33.446325   16194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:33.450514 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:33.450526 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:36.015724 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:36.026901 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:36.026965 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:36.053629 1302865 cri.go:89] found id: ""
	I1213 14:59:36.053645 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.053653 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:36.053658 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:36.053722 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:36.080154 1302865 cri.go:89] found id: ""
	I1213 14:59:36.080170 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.080177 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:36.080183 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:36.080247 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:36.105197 1302865 cri.go:89] found id: ""
	I1213 14:59:36.105212 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.105219 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:36.105224 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:36.105284 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:36.129426 1302865 cri.go:89] found id: ""
	I1213 14:59:36.129440 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.129453 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:36.129458 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:36.129516 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:36.157680 1302865 cri.go:89] found id: ""
	I1213 14:59:36.157695 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.157702 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:36.157707 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:36.157768 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:36.186306 1302865 cri.go:89] found id: ""
	I1213 14:59:36.186320 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.186327 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:36.186333 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:36.186404 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:36.210490 1302865 cri.go:89] found id: ""
	I1213 14:59:36.210504 1302865 logs.go:282] 0 containers: []
	W1213 14:59:36.210511 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:36.210518 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:36.210528 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:36.265225 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:36.265244 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:36.282625 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:36.282641 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:36.356056 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:36.344168   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.345304   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.346780   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.347080   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.348284   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:36.344168   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.345304   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.346780   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.347080   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:36.348284   16286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:36.356066 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:36.356078 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:36.426572 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:36.426595 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:38.953386 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:38.964071 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:38.964149 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:38.987398 1302865 cri.go:89] found id: ""
	I1213 14:59:38.987412 1302865 logs.go:282] 0 containers: []
	W1213 14:59:38.987420 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:38.987426 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:38.987501 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:39.014333 1302865 cri.go:89] found id: ""
	I1213 14:59:39.014348 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.014355 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:39.014360 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:39.014425 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:39.041685 1302865 cri.go:89] found id: ""
	I1213 14:59:39.041699 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.041706 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:39.041711 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:39.041773 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:39.065151 1302865 cri.go:89] found id: ""
	I1213 14:59:39.065165 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.065172 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:39.065177 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:39.065236 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:39.089601 1302865 cri.go:89] found id: ""
	I1213 14:59:39.089614 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.089621 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:39.089629 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:39.089695 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:39.114392 1302865 cri.go:89] found id: ""
	I1213 14:59:39.114406 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.114413 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:39.114418 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:39.114479 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:39.139175 1302865 cri.go:89] found id: ""
	I1213 14:59:39.139188 1302865 logs.go:282] 0 containers: []
	W1213 14:59:39.139195 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:39.139204 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:39.139214 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:39.194900 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:39.194920 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:39.212516 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:39.212534 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:39.278353 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:39.270327   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.270899   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272513   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272875   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.274310   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:39.270327   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.270899   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272513   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.272875   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:39.274310   16391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:39.278363 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:39.278376 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:39.339218 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:39.339237 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
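Every "describe nodes" attempt in these cycles fails the same way: the bundled kubectl cannot reach the apiserver on localhost:8441 (the --apiserver-port this test configures), so only the connection-refused stderr gets recorded. A short sketch of narrowing that down from inside the node, assuming the node image provides ss (if not, any equivalent port check works):

    # is anything listening on the configured apiserver port?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # re-run the exact probe from the log against the in-node kubeconfig
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

The kubectl invocation is the same one the log keeps retrying; the port check is only an illustrative first step.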
	I1213 14:59:41.878578 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:41.888870 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:41.888930 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:41.916325 1302865 cri.go:89] found id: ""
	I1213 14:59:41.916339 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.916346 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:41.916352 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:41.916408 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:41.940631 1302865 cri.go:89] found id: ""
	I1213 14:59:41.940646 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.940653 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:41.940658 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:41.940721 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:41.964819 1302865 cri.go:89] found id: ""
	I1213 14:59:41.964835 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.964842 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:41.964847 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:41.964909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:41.992880 1302865 cri.go:89] found id: ""
	I1213 14:59:41.992895 1302865 logs.go:282] 0 containers: []
	W1213 14:59:41.992902 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:41.992907 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:41.992966 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:42.037181 1302865 cri.go:89] found id: ""
	I1213 14:59:42.037196 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.037203 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:42.037208 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:42.037272 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:42.066224 1302865 cri.go:89] found id: ""
	I1213 14:59:42.066240 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.066247 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:42.066253 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:42.066324 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:42.113241 1302865 cri.go:89] found id: ""
	I1213 14:59:42.113259 1302865 logs.go:282] 0 containers: []
	W1213 14:59:42.113267 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:42.113275 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:42.113288 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:42.174660 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:42.174686 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:42.197359 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:42.197391 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:42.287788 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:42.278004   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.278734   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.280708   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.281502   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.282312   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:42.278004   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.278734   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.280708   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.281502   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:42.282312   16496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:42.287799 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:42.287810 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:42.353033 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:42.353052 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:44.892059 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:44.902815 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:44.902875 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:44.927725 1302865 cri.go:89] found id: ""
	I1213 14:59:44.927740 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.927747 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:44.927752 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:44.927815 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:44.957287 1302865 cri.go:89] found id: ""
	I1213 14:59:44.957301 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.957308 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:44.957313 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:44.957371 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:44.982138 1302865 cri.go:89] found id: ""
	I1213 14:59:44.982153 1302865 logs.go:282] 0 containers: []
	W1213 14:59:44.982160 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:44.982166 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:44.982225 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:45.025671 1302865 cri.go:89] found id: ""
	I1213 14:59:45.025689 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.025697 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:45.025704 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:45.025777 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:45.070096 1302865 cri.go:89] found id: ""
	I1213 14:59:45.070112 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.070121 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:45.070126 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:45.070203 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:45.113264 1302865 cri.go:89] found id: ""
	I1213 14:59:45.113281 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.113289 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:45.113302 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:45.113391 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:45.146027 1302865 cri.go:89] found id: ""
	I1213 14:59:45.146050 1302865 logs.go:282] 0 containers: []
	W1213 14:59:45.146058 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:45.146073 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:45.146084 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:45.242018 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:45.242086 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:45.278598 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:45.278619 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:45.377053 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:45.367099   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369078   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369934   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.371774   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.372065   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:45.367099   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369078   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.369934   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.371774   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:45.372065   16602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:45.377063 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:45.377073 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:45.449162 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:45.449183 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:47.980927 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:47.991934 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:47.991998 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:48.022075 1302865 cri.go:89] found id: ""
	I1213 14:59:48.022091 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.022098 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:48.022103 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:48.022169 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:48.052438 1302865 cri.go:89] found id: ""
	I1213 14:59:48.052454 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.052461 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:48.052466 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:48.052543 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:48.077918 1302865 cri.go:89] found id: ""
	I1213 14:59:48.077932 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.077940 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:48.077945 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:48.078008 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:48.107677 1302865 cri.go:89] found id: ""
	I1213 14:59:48.107691 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.107698 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:48.107703 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:48.107803 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:48.134492 1302865 cri.go:89] found id: ""
	I1213 14:59:48.134506 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.134514 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:48.134523 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:48.134616 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:48.159260 1302865 cri.go:89] found id: ""
	I1213 14:59:48.159274 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.159281 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:48.159286 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:48.159368 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:48.184905 1302865 cri.go:89] found id: ""
	I1213 14:59:48.184920 1302865 logs.go:282] 0 containers: []
	W1213 14:59:48.184927 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:48.184935 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:48.184945 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:48.240512 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:48.240535 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:48.257663 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:48.257683 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:48.323284 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:48.314914   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.315437   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317182   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317842   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.319544   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:48.314914   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.315437   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317182   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.317842   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:48.319544   16711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:48.323295 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:48.323306 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:48.393384 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:48.393403 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
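Since no control-plane container ever appears, the most useful evidence in these cycles is the kubelet and containerd journals that minikube keeps re-collecting. A sketch of pulling them manually and filtering for failures, using the same journalctl units as the log (the grep pattern is only illustrative):

    # kubelet: why are the static control-plane pods not being started or kept?
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40
    # containerd: image pulls, sandbox creation, runtime errors
    sudo journalctl -u containerd -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40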
	I1213 14:59:50.925922 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:50.936831 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:50.936895 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:50.963232 1302865 cri.go:89] found id: ""
	I1213 14:59:50.963246 1302865 logs.go:282] 0 containers: []
	W1213 14:59:50.963253 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:50.963258 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:50.963354 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:50.993552 1302865 cri.go:89] found id: ""
	I1213 14:59:50.993566 1302865 logs.go:282] 0 containers: []
	W1213 14:59:50.993572 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:50.993578 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:50.993639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:51.021945 1302865 cri.go:89] found id: ""
	I1213 14:59:51.021978 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.021986 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:51.021991 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:51.022051 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:51.049002 1302865 cri.go:89] found id: ""
	I1213 14:59:51.049017 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.049024 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:51.049029 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:51.049113 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:51.075979 1302865 cri.go:89] found id: ""
	I1213 14:59:51.075995 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.076003 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:51.076008 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:51.076071 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:51.101633 1302865 cri.go:89] found id: ""
	I1213 14:59:51.101648 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.101656 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:51.101661 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:51.101724 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:51.128983 1302865 cri.go:89] found id: ""
	I1213 14:59:51.128999 1302865 logs.go:282] 0 containers: []
	W1213 14:59:51.129007 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:51.129015 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:51.129025 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:51.185511 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:51.185538 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:51.203284 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:51.203306 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:51.265859 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:51.257020   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.257804   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.259431   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.260025   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.261672   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:51.257020   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.257804   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.259431   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.260025   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:51.261672   16816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:51.265869 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:51.265880 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:51.328096 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:51.328116 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:53.857136 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:53.867344 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:53.867405 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:53.890843 1302865 cri.go:89] found id: ""
	I1213 14:59:53.890857 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.890864 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:53.890869 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:53.890927 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:53.915236 1302865 cri.go:89] found id: ""
	I1213 14:59:53.915250 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.915258 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:53.915263 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:53.915341 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:53.939500 1302865 cri.go:89] found id: ""
	I1213 14:59:53.939515 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.939523 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:53.939528 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:53.939588 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:53.968671 1302865 cri.go:89] found id: ""
	I1213 14:59:53.968686 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.968693 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:53.968698 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:53.968766 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:53.992869 1302865 cri.go:89] found id: ""
	I1213 14:59:53.992883 1302865 logs.go:282] 0 containers: []
	W1213 14:59:53.992895 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:53.992900 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:53.992962 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:54.020494 1302865 cri.go:89] found id: ""
	I1213 14:59:54.020510 1302865 logs.go:282] 0 containers: []
	W1213 14:59:54.020518 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:54.020524 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:54.020587 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:54.047224 1302865 cri.go:89] found id: ""
	I1213 14:59:54.047240 1302865 logs.go:282] 0 containers: []
	W1213 14:59:54.047247 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:54.047256 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:54.047268 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:54.064625 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:54.064643 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:54.131051 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:54.122613   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.123186   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.124913   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.125456   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.126931   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:54.122613   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.123186   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.124913   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.125456   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:54.126931   16916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:54.131061 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:54.131072 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:54.198481 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:54.198502 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:54.229657 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:54.229673 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:56.788389 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:56.798893 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:56.798978 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:56.825463 1302865 cri.go:89] found id: ""
	I1213 14:59:56.825479 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.825486 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:56.825491 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:56.825569 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:56.850902 1302865 cri.go:89] found id: ""
	I1213 14:59:56.850916 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.850923 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:56.850928 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:56.850997 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:56.875729 1302865 cri.go:89] found id: ""
	I1213 14:59:56.875743 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.875750 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:56.875755 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:56.875812 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:56.904598 1302865 cri.go:89] found id: ""
	I1213 14:59:56.904612 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.904619 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:56.904624 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:56.904684 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:56.929612 1302865 cri.go:89] found id: ""
	I1213 14:59:56.929626 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.929633 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:56.929639 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:56.929696 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:56.954323 1302865 cri.go:89] found id: ""
	I1213 14:59:56.954337 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.954345 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:56.954350 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:56.954411 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:56.978916 1302865 cri.go:89] found id: ""
	I1213 14:59:56.978930 1302865 logs.go:282] 0 containers: []
	W1213 14:59:56.978937 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:56.978944 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 14:59:56.978955 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 14:59:56.996271 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 14:59:56.996290 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 14:59:57.067201 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 14:59:57.058255   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.058923   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.060482   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.061159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.062082   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 14:59:57.058255   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.058923   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.060482   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.061159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 14:59:57.062082   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 14:59:57.067214 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:57.067227 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:57.129467 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:57.129486 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 14:59:57.160756 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 14:59:57.160773 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 14:59:59.726541 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 14:59:59.737128 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 14:59:59.737192 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 14:59:59.762034 1302865 cri.go:89] found id: ""
	I1213 14:59:59.762050 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.762057 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 14:59:59.762063 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 14:59:59.762136 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 14:59:59.786710 1302865 cri.go:89] found id: ""
	I1213 14:59:59.786724 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.786731 1302865 logs.go:284] No container was found matching "etcd"
	I1213 14:59:59.786738 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 14:59:59.786799 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 14:59:59.823635 1302865 cri.go:89] found id: ""
	I1213 14:59:59.823649 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.823656 1302865 logs.go:284] No container was found matching "coredns"
	I1213 14:59:59.823661 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 14:59:59.823721 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 14:59:59.853555 1302865 cri.go:89] found id: ""
	I1213 14:59:59.853568 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.853576 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 14:59:59.853580 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 14:59:59.853639 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 14:59:59.878766 1302865 cri.go:89] found id: ""
	I1213 14:59:59.878781 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.878788 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 14:59:59.878793 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 14:59:59.878853 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 14:59:59.904985 1302865 cri.go:89] found id: ""
	I1213 14:59:59.904999 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.905006 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 14:59:59.905012 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 14:59:59.905084 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 14:59:59.929868 1302865 cri.go:89] found id: ""
	I1213 14:59:59.929882 1302865 logs.go:282] 0 containers: []
	W1213 14:59:59.929889 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 14:59:59.929896 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 14:59:59.929906 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 14:59:59.991222 1302865 logs.go:123] Gathering logs for container status ...
	I1213 14:59:59.991242 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:00:00.071719 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 15:00:00.071740 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:00:00.209914 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 15:00:00.209948 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:00:00.266871 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:00:00.266916 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:00:00.606023 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:00:00.575459   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.581155   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.582626   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.584864   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.585965   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 15:00:00.575459   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.581155   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.582626   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.584864   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:00:00.585965   17141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:00:03.107691 1302865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:00:03.118897 1302865 kubeadm.go:602] duration metric: took 4m4.796487812s to restartPrimaryControlPlane
	W1213 15:00:03.118966 1302865 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 15:00:03.119044 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 15:00:03.535783 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:00:03.550485 1302865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 15:00:03.558915 1302865 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:00:03.558988 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:00:03.567415 1302865 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:00:03.567426 1302865 kubeadm.go:158] found existing configuration files:
	
	I1213 15:00:03.567481 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 15:00:03.576037 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:00:03.576097 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:00:03.584074 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 15:00:03.592593 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:00:03.592651 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:00:03.601062 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 15:00:03.609623 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:00:03.609683 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:00:03.617551 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 15:00:03.625819 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:00:03.625879 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
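The stale-config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it; here every grep exits with status 2 because the files are already gone, so the rm calls are no-ops. A rough sketch of the equivalent loop (illustrative only, not minikube's actual implementation):

    # sketch of the cleanup the log performs file-by-file above:
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done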
	I1213 15:00:03.634092 1302865 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:00:03.677773 1302865 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 15:00:03.677823 1302865 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:00:03.751455 1302865 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:00:03.751520 1302865 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:00:03.751555 1302865 kubeadm.go:319] OS: Linux
	I1213 15:00:03.751599 1302865 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:00:03.751646 1302865 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:00:03.751692 1302865 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:00:03.751738 1302865 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:00:03.751785 1302865 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:00:03.751832 1302865 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:00:03.751877 1302865 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:00:03.751923 1302865 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:00:03.751968 1302865 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:00:03.818698 1302865 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:00:03.818804 1302865 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:00:03.818894 1302865 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:00:03.825177 1302865 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:00:03.828382 1302865 out.go:252]   - Generating certificates and keys ...
	I1213 15:00:03.828484 1302865 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:00:03.828568 1302865 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:00:03.828657 1302865 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 15:00:03.828722 1302865 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 15:00:03.828813 1302865 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 15:00:03.828870 1302865 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 15:00:03.828941 1302865 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 15:00:03.829005 1302865 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 15:00:03.829084 1302865 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 15:00:03.829160 1302865 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 15:00:03.829199 1302865 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 15:00:03.829258 1302865 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:00:04.177571 1302865 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:00:04.342429 1302865 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:00:04.668058 1302865 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:00:04.760444 1302865 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:00:05.013305 1302865 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:00:05.014367 1302865 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:00:05.019071 1302865 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:00:05.022340 1302865 out.go:252]   - Booting up control plane ...
	I1213 15:00:05.022442 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:00:05.022520 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:00:05.022586 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:00:05.042894 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:00:05.043146 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:00:05.050754 1302865 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:00:05.051023 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:00:05.051065 1302865 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:00:05.191860 1302865 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:00:05.191979 1302865 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:04:05.190333 1302865 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000252344s
	I1213 15:04:05.190362 1302865 kubeadm.go:319] 
	I1213 15:04:05.190420 1302865 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:04:05.190453 1302865 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:04:05.190557 1302865 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:04:05.190562 1302865 kubeadm.go:319] 
	I1213 15:04:05.190665 1302865 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:04:05.190696 1302865 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:04:05.190726 1302865 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:04:05.190729 1302865 kubeadm.go:319] 
	I1213 15:04:05.195506 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:04:05.195924 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:04:05.196033 1302865 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:04:05.196267 1302865 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 15:04:05.196271 1302865 kubeadm.go:319] 
	I1213 15:04:05.196339 1302865 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 15:04:05.196471 1302865 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000252344s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
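kubeadm waited the full 4m0s for the kubelet's local healthz endpoint before failing, and the hints it prints are the usual starting point for triage. The same checks can be run by hand on the node (hypothetical session, assuming shell access to the node; these are the exact commands kubeadm names above):

    # manual triage following the hints kubeadm prints above:
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
    curl -sS --max-time 5 http://127.0.0.1:10248/healthz; echo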
	
	I1213 15:04:05.196557 1302865 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 15:04:05.613572 1302865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:04:05.627532 1302865 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:04:05.627586 1302865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:04:05.635470 1302865 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:04:05.635487 1302865 kubeadm.go:158] found existing configuration files:
	
	I1213 15:04:05.635549 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 15:04:05.643770 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:04:05.643832 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:04:05.651305 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 15:04:05.659066 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:04:05.659119 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:04:05.666497 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 15:04:05.674867 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:04:05.674922 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:04:05.682604 1302865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 15:04:05.690488 1302865 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:04:05.690547 1302865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 15:04:05.697863 1302865 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:04:05.737903 1302865 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 15:04:05.738332 1302865 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:04:05.824821 1302865 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:04:05.824881 1302865 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:04:05.824914 1302865 kubeadm.go:319] OS: Linux
	I1213 15:04:05.824955 1302865 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:04:05.825000 1302865 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:04:05.825043 1302865 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:04:05.825103 1302865 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:04:05.825147 1302865 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:04:05.825200 1302865 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:04:05.825250 1302865 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:04:05.825294 1302865 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:04:05.825336 1302865 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:04:05.892296 1302865 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:04:05.892418 1302865 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:04:05.892526 1302865 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:04:05.898143 1302865 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:04:05.903540 1302865 out.go:252]   - Generating certificates and keys ...
	I1213 15:04:05.903629 1302865 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:04:05.903698 1302865 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:04:05.903775 1302865 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 15:04:05.903837 1302865 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 15:04:05.903908 1302865 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 15:04:05.903958 1302865 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 15:04:05.904021 1302865 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 15:04:05.904084 1302865 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 15:04:05.904160 1302865 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 15:04:05.904234 1302865 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 15:04:05.904275 1302865 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 15:04:05.904330 1302865 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:04:05.992570 1302865 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:04:06.166280 1302865 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:04:06.244452 1302865 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:04:06.386969 1302865 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:04:06.630629 1302865 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:04:06.631865 1302865 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:04:06.635872 1302865 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:04:06.639278 1302865 out.go:252]   - Booting up control plane ...
	I1213 15:04:06.639389 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:04:06.639462 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:04:06.639523 1302865 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:04:06.659049 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:04:06.659158 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:04:06.666661 1302865 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:04:06.666977 1302865 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:04:06.667151 1302865 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:04:06.810085 1302865 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:04:06.810198 1302865 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:08:06.809904 1302865 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000225024s
	I1213 15:08:06.809924 1302865 kubeadm.go:319] 
	I1213 15:08:06.810412 1302865 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:08:06.810499 1302865 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:08:06.810921 1302865 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:08:06.810931 1302865 kubeadm.go:319] 
	I1213 15:08:06.811146 1302865 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:08:06.811211 1302865 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:08:06.811291 1302865 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:08:06.811302 1302865 kubeadm.go:319] 
	I1213 15:08:06.814720 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:08:06.816724 1302865 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:08:06.816881 1302865 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:08:06.817212 1302865 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 15:08:06.817216 1302865 kubeadm.go:319] 
	I1213 15:08:06.817309 1302865 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 15:08:06.817355 1302865 kubeadm.go:403] duration metric: took 12m8.532180676s to StartCluster
	I1213 15:08:06.817385 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:08:06.817448 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:08:06.841821 1302865 cri.go:89] found id: ""
	I1213 15:08:06.841835 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.841841 1302865 logs.go:284] No container was found matching "kube-apiserver"
	I1213 15:08:06.841847 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:08:06.841909 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:08:06.865102 1302865 cri.go:89] found id: ""
	I1213 15:08:06.865122 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.865129 1302865 logs.go:284] No container was found matching "etcd"
	I1213 15:08:06.865134 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:08:06.865194 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:08:06.889354 1302865 cri.go:89] found id: ""
	I1213 15:08:06.889369 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.889376 1302865 logs.go:284] No container was found matching "coredns"
	I1213 15:08:06.889381 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:08:06.889444 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:08:06.916987 1302865 cri.go:89] found id: ""
	I1213 15:08:06.917001 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.917008 1302865 logs.go:284] No container was found matching "kube-scheduler"
	I1213 15:08:06.917014 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:08:06.917074 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:08:06.941966 1302865 cri.go:89] found id: ""
	I1213 15:08:06.941980 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.941987 1302865 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:08:06.941992 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:08:06.942053 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:08:06.967555 1302865 cri.go:89] found id: ""
	I1213 15:08:06.967570 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.967576 1302865 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 15:08:06.967582 1302865 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:08:06.967642 1302865 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:08:06.990643 1302865 cri.go:89] found id: ""
	I1213 15:08:06.990661 1302865 logs.go:282] 0 containers: []
	W1213 15:08:06.990669 1302865 logs.go:284] No container was found matching "kindnet"
	I1213 15:08:06.990677 1302865 logs.go:123] Gathering logs for kubelet ...
	I1213 15:08:06.990688 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:08:07.046948 1302865 logs.go:123] Gathering logs for dmesg ...
	I1213 15:08:07.046967 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:08:07.064271 1302865 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:08:07.064292 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:08:07.156681 1302865 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:08:07.142614   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.149501   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.150219   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.151350   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.152858   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 15:08:07.142614   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.149501   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.150219   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.151350   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:07.152858   20948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:08:07.156693 1302865 logs.go:123] Gathering logs for containerd ...
	I1213 15:08:07.156703 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:08:07.225180 1302865 logs.go:123] Gathering logs for container status ...
	I1213 15:08:07.225205 1302865 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 15:08:07.257292 1302865 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 15:08:07.257342 1302865 out.go:285] * 
	W1213 15:08:07.257449 1302865 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:08:07.257519 1302865 out.go:285] * 
	W1213 15:08:07.259853 1302865 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 15:08:07.265906 1302865 out.go:203] 
	W1213 15:08:07.268865 1302865 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000225024s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:08:07.268911 1302865 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 15:08:07.268933 1302865 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 15:08:07.272012 1302865 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371134322Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371145407Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371154235Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371164894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371186333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371215091Z" level=info msg="Connect containerd service"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.371566107Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.372148338Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.392820866Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.392994105Z" level=info msg="Start subscribing containerd event"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.393210215Z" level=info msg="Start recovering state"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.393152477Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.438865616Z" level=info msg="Start event monitor"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439053460Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439140720Z" level=info msg="Start streaming server"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439202880Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439258526Z" level=info msg="runtime interface starting up..."
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439350397Z" level=info msg="starting plugins..."
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.439418867Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 14:55:56 functional-562018 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 14:55:56 functional-562018 containerd[9685]: time="2025-12-13T14:55:56.441778888Z" level=info msg="containerd successfully booted in 0.092313s"
	Dec 13 15:08:16 functional-562018 containerd[9685]: time="2025-12-13T15:08:16.603080956Z" level=info msg="No images store for sha256:3fb21f6d7fe9fd863c3548cb9498b8e552e958f0a50edc71e300f38a249a8021"
	Dec 13 15:08:16 functional-562018 containerd[9685]: time="2025-12-13T15:08:16.605435011Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-562018\""
	Dec 13 15:08:16 functional-562018 containerd[9685]: time="2025-12-13T15:08:16.614153614Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:08:16 functional-562018 containerd[9685]: time="2025-12-13T15:08:16.614640453Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-562018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 15:08:17.418539   21714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:17.419206   21714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:17.420797   21714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:17.421298   21714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 15:08:17.422871   21714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 15:08:17 up  6:50,  0 user,  load average: 0.93, 0.33, 0.49
	Linux functional-562018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 15:08:13 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:08:14 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 330.
	Dec 13 15:08:14 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:14 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:14 functional-562018 kubelet[21468]: E1213 15:08:14.670314   21468 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:08:14 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:08:14 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:08:15 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 331.
	Dec 13 15:08:15 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:15 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:15 functional-562018 kubelet[21518]: E1213 15:08:15.395632   21518 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:08:15 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:08:15 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:08:16 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 332.
	Dec 13 15:08:16 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:16 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:16 functional-562018 kubelet[21548]: E1213 15:08:16.143139   21548 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:08:16 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:08:16 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:08:16 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 333.
	Dec 13 15:08:16 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:16 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:08:16 functional-562018 kubelet[21624]: E1213 15:08:16.892395   21624 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:08:16 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:08:16 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 2 (411.463004ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (3.00s)
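
The kubelet journal above shows the actual failure: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host unless the KubeletConfiguration option FailCgroupV1 is explicitly set to false, so the service restart-loops (counter past 330) and the apiserver never comes up. A minimal triage sketch against this profile, reusing the commands kubeadm and minikube themselves suggest in the log; the cgroup filesystem check and the retry invocation are illustrative assumptions, not steps this run performs:

# Check whether the functional-562018 node is on cgroup v1 or v2 ("tmpfs" = v1, "cgroup2fs" = v2).
out/minikube-linux-arm64 -p functional-562018 ssh -- stat -fc %T /sys/fs/cgroup

# Inspect the restart loop with the commands suggested in the kubeadm output above.
out/minikube-linux-arm64 -p functional-562018 ssh -- sudo systemctl status kubelet
out/minikube-linux-arm64 -p functional-562018 ssh -- sudo journalctl -u kubelet -n 50 --no-pager

# Retry the start with the extra-config minikube itself suggests (issue #4172); whether this clears the
# FailCgroupV1 validation on a cgroup v1 host is not confirmed by this run.
out/minikube-linux-arm64 start -p functional-562018 --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd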

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-562018 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-562018 create deployment hello-node --image kicbase/echo-server: exit status 1 (82.155114ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-562018 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 service list: exit status 103 (310.458994ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-562018 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-562018"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-562018 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-562018 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-562018\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.31s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 service list -o json: exit status 103 (295.914765ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-562018 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-562018"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-562018 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.30s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 service --namespace=default --https --url hello-node: exit status 103 (333.101636ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-562018 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-562018"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-562018 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 service hello-node --url --format={{.IP}}: exit status 103 (336.440888ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-562018 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-562018"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-562018 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-562018 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-562018\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 service hello-node --url: exit status 103 (343.196139ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-562018 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-562018"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-562018 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-562018 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-562018"
functional_test.go:1579: failed to parse "* The control-plane node functional-562018 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-562018\"": parse "* The control-plane node functional-562018 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-562018\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.34s)
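
Each ServiceCmd subtest above fails the same way: minikube exits with code 103 because the profile's apiserver is reported Stopped, so there is no hello-node deployment, service list, or URL to act on. A quick confirmation sketch, reusing the status command the harness itself runs; the /readyz probe is an illustrative addition and is simply refused while the control plane is down:

# Same check helpers_test.go runs; prints "Stopped" for this profile in the state captured above.
out/minikube-linux-arm64 status -p functional-562018 --format='{{.APIServer}}'

# Direct probe of the endpoint the kubectl errors point at (https://192.168.49.2:8441);
# expect "connection refused" until the apiserver starts.
kubectl --context functional-562018 get --raw /readyz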

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-562018 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-562018 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1213 15:08:23.227240 1317961 out.go:360] Setting OutFile to fd 1 ...
I1213 15:08:23.227604 1317961 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:08:23.227637 1317961 out.go:374] Setting ErrFile to fd 2...
I1213 15:08:23.227662 1317961 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:08:23.228031 1317961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 15:08:23.228384 1317961 mustload.go:66] Loading cluster: functional-562018
I1213 15:08:23.229411 1317961 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 15:08:23.230143 1317961 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
I1213 15:08:23.257951 1317961 host.go:66] Checking if "functional-562018" exists ...
I1213 15:08:23.258270 1317961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 15:08:23.359689 1317961 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 15:08:23.348161546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 15:08:23.359805 1317961 api_server.go:166] Checking apiserver status ...
I1213 15:08:23.359865 1317961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 15:08:23.359908 1317961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 15:08:23.426714 1317961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
W1213 15:08:23.543248 1317961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1213 15:08:23.546760 1317961 out.go:179] * The control-plane node functional-562018 apiserver is not running: (state=Stopped)
I1213 15:08:23.549765 1317961 out.go:179]   To start a cluster, run: "minikube start -p functional-562018"

                                                
                                                
stdout: * The control-plane node functional-562018 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-562018"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-562018 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-562018 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-562018 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-562018 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1317960: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-562018 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-562018 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-562018 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-562018 apply -f testdata/testsvc.yaml: exit status 1 (154.793494ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-562018 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.16s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (97.73s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.109.93.161": Temporary Error: Get "http://10.109.93.161": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-562018 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-562018 get svc nginx-svc: exit status 1 (58.724316ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-562018 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (97.73s)
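
The AccessDirect failure is downstream of the same stopped apiserver: the tunnel has nothing to route to, so the nginx-svc ClusterIP (10.109.93.161 in this run) never answers. For reference, the check the test performs amounts to the following sketch of the expected happy path; it cannot succeed in the state captured above:

# With a healthy cluster and an active tunnel, the service resolves and serves the nginx welcome page.
kubectl --context functional-562018 get svc nginx-svc
curl -s --max-time 10 http://10.109.93.161 | grep "Welcome to nginx!"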

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2700642734/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765638608439849039" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2700642734/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765638608439849039" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2700642734/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765638608439849039" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2700642734/001/test-1765638608439849039
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (375.61323ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 15:10:08.815748 1252934 retry.go:31] will retry after 265.610826ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 15:10 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 15:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 15:10 test-1765638608439849039
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh cat /mount-9p/test-1765638608439849039
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-562018 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-562018 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (54.917706ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-562018 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (295.390566ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=40061)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 13 15:10 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 13 15:10 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 13 15:10 test-1765638608439849039
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-562018 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2700642734/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2700642734/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2700642734/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:40061
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2700642734/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2700642734/001:/mount-9p --alsologtostderr -v=1] stderr:
I1213 15:10:08.503760 1320265 out.go:360] Setting OutFile to fd 1 ...
I1213 15:10:08.504011 1320265 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:08.504036 1320265 out.go:374] Setting ErrFile to fd 2...
I1213 15:10:08.504052 1320265 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:08.504305 1320265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 15:10:08.504575 1320265 mustload.go:66] Loading cluster: functional-562018
I1213 15:10:08.504973 1320265 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 15:10:08.505609 1320265 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
I1213 15:10:08.526549 1320265 host.go:66] Checking if "functional-562018" exists ...
I1213 15:10:08.526875 1320265 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 15:10:08.653607 1320265 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 15:10:08.636466527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 15:10:08.653755 1320265 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 15:10:08.688885 1320265 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2700642734/001 into VM as /mount-9p ...
I1213 15:10:08.692049 1320265 out.go:179]   - Mount type:   9p
I1213 15:10:08.695003 1320265 out.go:179]   - User ID:      docker
I1213 15:10:08.698034 1320265 out.go:179]   - Group ID:     docker
I1213 15:10:08.700932 1320265 out.go:179]   - Version:      9p2000.L
I1213 15:10:08.703798 1320265 out.go:179]   - Message Size: 262144
I1213 15:10:08.706695 1320265 out.go:179]   - Options:      map[]
I1213 15:10:08.709501 1320265 out.go:179]   - Bind Address: 192.168.49.1:40061
I1213 15:10:08.712342 1320265 out.go:179] * Userspace file server: 
I1213 15:10:08.712642 1320265 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 15:10:08.712731 1320265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 15:10:08.732542 1320265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
I1213 15:10:08.838385 1320265 mount.go:180] unmount for /mount-9p ran successfully
I1213 15:10:08.838413 1320265 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1213 15:10:08.846727 1320265 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=40061,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1213 15:10:08.856726 1320265 main.go:127] stdlog: ufs.go:141 connected
I1213 15:10:08.856892 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tversion tag 65535 msize 262144 version '9P2000.L'
I1213 15:10:08.856930 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rversion tag 65535 msize 262144 version '9P2000'
I1213 15:10:08.857153 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1213 15:10:08.857213 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rattach tag 0 aqid (c9d63e 18431635 'd')
I1213 15:10:08.857483 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 0
I1213 15:10:08.857548 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d63e 18431635 'd') m d775 at 0 mt 1765638608 l 4096 t 0 d 0 ext )
I1213 15:10:08.858710 1320265 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/.mount-process: {Name:mkd5edd605f6f9f640c5115210e2959c6ab7e0e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 15:10:08.858904 1320265 mount.go:105] mount successful: ""
I1213 15:10:08.862335 1320265 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2700642734/001 to /mount-9p
I1213 15:10:08.865297 1320265 out.go:203] 
I1213 15:10:08.868083 1320265 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1213 15:10:09.619840 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 0
I1213 15:10:09.619920 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d63e 18431635 'd') m d775 at 0 mt 1765638608 l 4096 t 0 d 0 ext )
I1213 15:10:09.620298 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Twalk tag 0 fid 0 newfid 1 
I1213 15:10:09.620339 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rwalk tag 0 
I1213 15:10:09.620466 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Topen tag 0 fid 1 mode 0
I1213 15:10:09.620521 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Ropen tag 0 qid (c9d63e 18431635 'd') iounit 0
I1213 15:10:09.620663 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 0
I1213 15:10:09.620726 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d63e 18431635 'd') m d775 at 0 mt 1765638608 l 4096 t 0 d 0 ext )
I1213 15:10:09.620879 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tread tag 0 fid 1 offset 0 count 262120
I1213 15:10:09.621010 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rread tag 0 count 258
I1213 15:10:09.621158 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tread tag 0 fid 1 offset 258 count 261862
I1213 15:10:09.621189 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rread tag 0 count 0
I1213 15:10:09.621321 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tread tag 0 fid 1 offset 258 count 262120
I1213 15:10:09.621350 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rread tag 0 count 0
I1213 15:10:09.621479 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Twalk tag 0 fid 0 newfid 2 0:'test-1765638608439849039' 
I1213 15:10:09.621513 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rwalk tag 0 (c9d641 18431635 '') 
I1213 15:10:09.621640 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 2
I1213 15:10:09.621672 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('test-1765638608439849039' 'jenkins' 'jenkins' '' q (c9d641 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:09.621809 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 2
I1213 15:10:09.621843 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('test-1765638608439849039' 'jenkins' 'jenkins' '' q (c9d641 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:09.621971 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tclunk tag 0 fid 2
I1213 15:10:09.621996 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rclunk tag 0
I1213 15:10:09.622120 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 15:10:09.622154 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rwalk tag 0 (c9d63f 18431635 '') 
I1213 15:10:09.622285 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 2
I1213 15:10:09.622318 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d63f 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:09.622437 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 2
I1213 15:10:09.622471 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d63f 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:09.622597 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tclunk tag 0 fid 2
I1213 15:10:09.622621 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rclunk tag 0
I1213 15:10:09.622751 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 15:10:09.622788 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rwalk tag 0 (c9d640 18431635 '') 
I1213 15:10:09.622908 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 2
I1213 15:10:09.622942 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d640 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:09.623055 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 2
I1213 15:10:09.623120 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d640 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:09.623231 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tclunk tag 0 fid 2
I1213 15:10:09.623255 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rclunk tag 0
I1213 15:10:09.623453 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tread tag 0 fid 1 offset 258 count 262120
I1213 15:10:09.623490 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rread tag 0 count 0
I1213 15:10:09.623617 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tclunk tag 0 fid 1
I1213 15:10:09.623654 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rclunk tag 0
I1213 15:10:09.892018 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Twalk tag 0 fid 0 newfid 1 0:'test-1765638608439849039' 
I1213 15:10:09.892101 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rwalk tag 0 (c9d641 18431635 '') 
I1213 15:10:09.892295 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 1
I1213 15:10:09.892341 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('test-1765638608439849039' 'jenkins' 'jenkins' '' q (c9d641 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:09.892501 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Twalk tag 0 fid 1 newfid 2 
I1213 15:10:09.892532 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rwalk tag 0 
I1213 15:10:09.892645 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Topen tag 0 fid 2 mode 0
I1213 15:10:09.892691 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Ropen tag 0 qid (c9d641 18431635 '') iounit 0
I1213 15:10:09.892850 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 1
I1213 15:10:09.892890 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('test-1765638608439849039' 'jenkins' 'jenkins' '' q (c9d641 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:09.893039 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tread tag 0 fid 2 offset 0 count 262120
I1213 15:10:09.893086 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rread tag 0 count 24
I1213 15:10:09.893217 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tread tag 0 fid 2 offset 24 count 262120
I1213 15:10:09.893244 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rread tag 0 count 0
I1213 15:10:09.893378 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tread tag 0 fid 2 offset 24 count 262120
I1213 15:10:09.893423 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rread tag 0 count 0
I1213 15:10:09.893584 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tclunk tag 0 fid 2
I1213 15:10:09.893627 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rclunk tag 0
I1213 15:10:09.893819 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tclunk tag 0 fid 1
I1213 15:10:09.893855 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rclunk tag 0
I1213 15:10:10.246477 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 0
I1213 15:10:10.246558 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d63e 18431635 'd') m d775 at 0 mt 1765638608 l 4096 t 0 d 0 ext )
I1213 15:10:10.246926 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Twalk tag 0 fid 0 newfid 1 
I1213 15:10:10.246978 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rwalk tag 0 
I1213 15:10:10.247127 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Topen tag 0 fid 1 mode 0
I1213 15:10:10.247195 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Ropen tag 0 qid (c9d63e 18431635 'd') iounit 0
I1213 15:10:10.247339 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 0
I1213 15:10:10.247385 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d63e 18431635 'd') m d775 at 0 mt 1765638608 l 4096 t 0 d 0 ext )
I1213 15:10:10.247586 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tread tag 0 fid 1 offset 0 count 262120
I1213 15:10:10.247712 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rread tag 0 count 258
I1213 15:10:10.247852 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tread tag 0 fid 1 offset 258 count 261862
I1213 15:10:10.247884 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rread tag 0 count 0
I1213 15:10:10.248051 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tread tag 0 fid 1 offset 258 count 262120
I1213 15:10:10.248100 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rread tag 0 count 0
I1213 15:10:10.248276 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Twalk tag 0 fid 0 newfid 2 0:'test-1765638608439849039' 
I1213 15:10:10.248317 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rwalk tag 0 (c9d641 18431635 '') 
I1213 15:10:10.248469 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 2
I1213 15:10:10.248508 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('test-1765638608439849039' 'jenkins' 'jenkins' '' q (c9d641 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:10.248644 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 2
I1213 15:10:10.248677 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('test-1765638608439849039' 'jenkins' 'jenkins' '' q (c9d641 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:10.248797 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tclunk tag 0 fid 2
I1213 15:10:10.248835 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rclunk tag 0
I1213 15:10:10.249034 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 15:10:10.249088 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rwalk tag 0 (c9d63f 18431635 '') 
I1213 15:10:10.249263 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 2
I1213 15:10:10.249323 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d63f 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:10.249472 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 2
I1213 15:10:10.249505 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d63f 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:10.249620 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tclunk tag 0 fid 2
I1213 15:10:10.249645 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rclunk tag 0
I1213 15:10:10.249791 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 15:10:10.249826 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rwalk tag 0 (c9d640 18431635 '') 
I1213 15:10:10.249942 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 2
I1213 15:10:10.249977 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d640 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:10.250089 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tstat tag 0 fid 2
I1213 15:10:10.250121 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d640 18431635 '') m 644 at 0 mt 1765638608 l 24 t 0 d 0 ext )
I1213 15:10:10.250234 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tclunk tag 0 fid 2
I1213 15:10:10.250264 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rclunk tag 0
I1213 15:10:10.250427 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tread tag 0 fid 1 offset 258 count 262120
I1213 15:10:10.250469 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rread tag 0 count 0
I1213 15:10:10.250616 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tclunk tag 0 fid 1
I1213 15:10:10.250662 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rclunk tag 0
I1213 15:10:10.251982 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1213 15:10:10.252057 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rerror tag 0 ename 'file not found' ecode 0
I1213 15:10:10.532752 1320265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:33578 Tclunk tag 0 fid 0
I1213 15:10:10.532805 1320265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:33578 Rclunk tag 0
I1213 15:10:10.533817 1320265 main.go:127] stdlog: ufs.go:147 disconnected
I1213 15:10:10.557494 1320265 out.go:179] * Unmounting /mount-9p ...
I1213 15:10:10.560525 1320265 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 15:10:10.567615 1320265 mount.go:180] unmount for /mount-9p ran successfully
I1213 15:10:10.567720 1320265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/.mount-process: {Name:mkd5edd605f6f9f640c5115210e2959c6ab7e0e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 15:10:10.570825 1320265 out.go:203] 
W1213 15:10:10.573859 1320265 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1213 15:10:10.576697 1320265 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.22s)
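Note on this failure: the 9p mount itself came up cleanly (the guest-side "sudo mount -t 9p ..." succeeded, and the test files are listed under /mount-9p); what failed is the busybox-mount pod step, because kubectl again hit "connection refused" on 192.168.49.2:8441. The host-side verification the test performs is essentially "run minikube ssh findmnt and retry briefly" (the retry.go:31 line above shows one such ~265 ms retry). The Go sketch below only illustrates that pattern and is not the real functional_test_mount_test.go helper; the binary path and profile name are taken from this log, while the retry count and sleep are assumptions.

// mountcheck_sketch.go -- rough illustration of verifying the 9p mount from the host:
// run `minikube ssh "findmnt -T /mount-9p | grep 9p"` and retry briefly while the
// userspace file server comes up.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func mountVisible(minikubeBin, profile string) bool {
	cmd := exec.Command(minikubeBin, "-p", profile, "ssh", "findmnt -T /mount-9p | grep 9p")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// grep found no 9p entry (exit 1) or the ssh itself failed.
		return false
	}
	return len(out) > 0
}

func main() {
	const bin = "out/minikube-linux-arm64"   // from the log above
	const profile = "functional-562018"      // from the log above
	for attempt := 1; attempt <= 5; attempt++ { // retry budget is an assumption
		if mountVisible(bin, profile) {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(300 * time.Millisecond) // crude backoff; the real test uses its retry helper
	}
	fmt.Println("9p mount never became visible")
}

Once the apiserver is healthy again, the remaining steps this run never reached are the busybox-mount pod reading /mount-9p and writing pod-dates, which is why the debug output above shows "cat: /mount-9p/pod-dates: No such file or directory".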

x
+
TestKubernetesUpgrade (798.26s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-098313 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1213 15:40:18.171135 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-098313 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.826688789s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-098313
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-098313: (1.383424089s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-098313 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-098313 status --format={{.Host}}: exit status 7 (96.761526ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-098313 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-098313 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m34.337635421s)

-- stdout --
	* [kubernetes-upgrade-098313] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-098313" primary control-plane node in "kubernetes-upgrade-098313" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1213 15:40:32.280796 1450159 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:40:32.281353 1450159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:40:32.281387 1450159 out.go:374] Setting ErrFile to fd 2...
	I1213 15:40:32.281407 1450159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:40:32.281708 1450159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:40:32.282131 1450159 out.go:368] Setting JSON to false
	I1213 15:40:32.283139 1450159 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":26581,"bootTime":1765613851,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 15:40:32.283245 1450159 start.go:143] virtualization:  
	I1213 15:40:32.286734 1450159 out.go:179] * [kubernetes-upgrade-098313] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 15:40:32.290553 1450159 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 15:40:32.290632 1450159 notify.go:221] Checking for updates...
	I1213 15:40:32.294392 1450159 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 15:40:32.297804 1450159 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 15:40:32.300779 1450159 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 15:40:32.303709 1450159 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 15:40:32.307235 1450159 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 15:40:32.310590 1450159 config.go:182] Loaded profile config "kubernetes-upgrade-098313": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1213 15:40:32.311412 1450159 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 15:40:32.354867 1450159 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 15:40:32.354997 1450159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 15:40:32.469355 1450159 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 15:40:32.455239157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 15:40:32.469458 1450159 docker.go:319] overlay module found
	I1213 15:40:32.472470 1450159 out.go:179] * Using the docker driver based on existing profile
	I1213 15:40:32.475145 1450159 start.go:309] selected driver: docker
	I1213 15:40:32.475160 1450159 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-098313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-098313 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 15:40:32.475248 1450159 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 15:40:32.476009 1450159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 15:40:32.562342 1450159 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 15:40:32.552893322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 15:40:32.562671 1450159 cni.go:84] Creating CNI manager for ""
	I1213 15:40:32.562725 1450159 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 15:40:32.562761 1450159 start.go:353] cluster config:
	{Name:kubernetes-upgrade-098313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-098313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 15:40:32.565934 1450159 out.go:179] * Starting "kubernetes-upgrade-098313" primary control-plane node in "kubernetes-upgrade-098313" cluster
	I1213 15:40:32.568773 1450159 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 15:40:32.571870 1450159 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 15:40:32.574826 1450159 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 15:40:32.574890 1450159 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 15:40:32.574900 1450159 cache.go:65] Caching tarball of preloaded images
	I1213 15:40:32.575010 1450159 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 15:40:32.575020 1450159 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 15:40:32.575138 1450159 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/config.json ...
	I1213 15:40:32.575396 1450159 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 15:40:32.601905 1450159 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 15:40:32.601928 1450159 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 15:40:32.601942 1450159 cache.go:243] Successfully downloaded all kic artifacts
	I1213 15:40:32.601970 1450159 start.go:360] acquireMachinesLock for kubernetes-upgrade-098313: {Name:mka4024acc223c0f8d1fffee7b7b7e1eeef5fb0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 15:40:32.602043 1450159 start.go:364] duration metric: took 43.035µs to acquireMachinesLock for "kubernetes-upgrade-098313"
	I1213 15:40:32.602067 1450159 start.go:96] Skipping create...Using existing machine configuration
	I1213 15:40:32.602073 1450159 fix.go:54] fixHost starting: 
	I1213 15:40:32.602389 1450159 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-098313 --format={{.State.Status}}
	I1213 15:40:32.631877 1450159 fix.go:112] recreateIfNeeded on kubernetes-upgrade-098313: state=Stopped err=<nil>
	W1213 15:40:32.631907 1450159 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 15:40:32.635277 1450159 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-098313" ...
	I1213 15:40:32.635395 1450159 cli_runner.go:164] Run: docker start kubernetes-upgrade-098313
	I1213 15:40:32.983137 1450159 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-098313 --format={{.State.Status}}
	I1213 15:40:33.009723 1450159 kic.go:430] container "kubernetes-upgrade-098313" state is running.
	I1213 15:40:33.010273 1450159 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-098313
	I1213 15:40:33.039102 1450159 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/config.json ...
	I1213 15:40:33.039447 1450159 machine.go:94] provisionDockerMachine start ...
	I1213 15:40:33.039538 1450159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-098313
	I1213 15:40:33.080525 1450159 main.go:143] libmachine: Using SSH client type: native
	I1213 15:40:33.080861 1450159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34143 <nil> <nil>}
	I1213 15:40:33.080870 1450159 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 15:40:33.081853 1450159 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39400->127.0.0.1:34143: read: connection reset by peer
	I1213 15:40:36.239518 1450159 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-098313
	
	I1213 15:40:36.239549 1450159 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-098313"
	I1213 15:40:36.239611 1450159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-098313
	I1213 15:40:36.258706 1450159 main.go:143] libmachine: Using SSH client type: native
	I1213 15:40:36.259010 1450159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34143 <nil> <nil>}
	I1213 15:40:36.259027 1450159 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-098313 && echo "kubernetes-upgrade-098313" | sudo tee /etc/hostname
	I1213 15:40:36.425237 1450159 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-098313
	
	I1213 15:40:36.425322 1450159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-098313
	I1213 15:40:36.444881 1450159 main.go:143] libmachine: Using SSH client type: native
	I1213 15:40:36.445312 1450159 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34143 <nil> <nil>}
	I1213 15:40:36.445338 1450159 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-098313' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-098313/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-098313' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 15:40:36.595850 1450159 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 15:40:36.595934 1450159 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 15:40:36.595971 1450159 ubuntu.go:190] setting up certificates
	I1213 15:40:36.595996 1450159 provision.go:84] configureAuth start
	I1213 15:40:36.596084 1450159 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-098313
	I1213 15:40:36.614673 1450159 provision.go:143] copyHostCerts
	I1213 15:40:36.614757 1450159 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 15:40:36.614770 1450159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 15:40:36.614846 1450159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 15:40:36.614944 1450159 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 15:40:36.614955 1450159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 15:40:36.614983 1450159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 15:40:36.615056 1450159 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 15:40:36.615065 1450159 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 15:40:36.615090 1450159 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 15:40:36.615149 1450159 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-098313 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-098313 localhost minikube]
	I1213 15:40:36.788203 1450159 provision.go:177] copyRemoteCerts
	I1213 15:40:36.788275 1450159 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 15:40:36.788327 1450159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-098313
	I1213 15:40:36.805854 1450159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/kubernetes-upgrade-098313/id_rsa Username:docker}
	I1213 15:40:36.911417 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 15:40:36.931385 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 15:40:36.950347 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 15:40:36.968683 1450159 provision.go:87] duration metric: took 372.666546ms to configureAuth
	I1213 15:40:36.968755 1450159 ubuntu.go:206] setting minikube options for container-runtime
	I1213 15:40:36.968964 1450159 config.go:182] Loaded profile config "kubernetes-upgrade-098313": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 15:40:36.968979 1450159 machine.go:97] duration metric: took 3.929521067s to provisionDockerMachine
	I1213 15:40:36.968988 1450159 start.go:293] postStartSetup for "kubernetes-upgrade-098313" (driver="docker")
	I1213 15:40:36.969000 1450159 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 15:40:36.969062 1450159 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 15:40:36.969108 1450159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-098313
	I1213 15:40:36.986755 1450159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/kubernetes-upgrade-098313/id_rsa Username:docker}
	I1213 15:40:37.095503 1450159 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 15:40:37.098781 1450159 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 15:40:37.098811 1450159 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 15:40:37.098823 1450159 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 15:40:37.098877 1450159 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 15:40:37.098952 1450159 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 15:40:37.099055 1450159 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 15:40:37.106387 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 15:40:37.124461 1450159 start.go:296] duration metric: took 155.456717ms for postStartSetup
	I1213 15:40:37.124601 1450159 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 15:40:37.124649 1450159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-098313
	I1213 15:40:37.142341 1450159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/kubernetes-upgrade-098313/id_rsa Username:docker}
	I1213 15:40:37.245318 1450159 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 15:40:37.250189 1450159 fix.go:56] duration metric: took 4.648109178s for fixHost
	I1213 15:40:37.250217 1450159 start.go:83] releasing machines lock for "kubernetes-upgrade-098313", held for 4.648165398s
	I1213 15:40:37.250297 1450159 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-098313
	I1213 15:40:37.267413 1450159 ssh_runner.go:195] Run: cat /version.json
	I1213 15:40:37.267480 1450159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-098313
	I1213 15:40:37.267748 1450159 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 15:40:37.267801 1450159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-098313
	I1213 15:40:37.289690 1450159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/kubernetes-upgrade-098313/id_rsa Username:docker}
	I1213 15:40:37.306501 1450159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34143 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/kubernetes-upgrade-098313/id_rsa Username:docker}
	I1213 15:40:37.392756 1450159 ssh_runner.go:195] Run: systemctl --version
	I1213 15:40:37.510000 1450159 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 15:40:37.514303 1450159 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 15:40:37.514400 1450159 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 15:40:37.524540 1450159 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 15:40:37.524619 1450159 start.go:496] detecting cgroup driver to use...
	I1213 15:40:37.524664 1450159 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 15:40:37.524729 1450159 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 15:40:37.542581 1450159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 15:40:37.556453 1450159 docker.go:218] disabling cri-docker service (if available) ...
	I1213 15:40:37.556533 1450159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 15:40:37.572271 1450159 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 15:40:37.585506 1450159 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 15:40:37.701835 1450159 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 15:40:37.824203 1450159 docker.go:234] disabling docker service ...
	I1213 15:40:37.824301 1450159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 15:40:37.839074 1450159 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 15:40:37.852148 1450159 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 15:40:37.971561 1450159 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 15:40:38.102191 1450159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 15:40:38.116359 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 15:40:38.130795 1450159 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 15:40:38.141666 1450159 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 15:40:38.150543 1450159 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 15:40:38.150610 1450159 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 15:40:38.159471 1450159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 15:40:38.168566 1450159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 15:40:38.177648 1450159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 15:40:38.186618 1450159 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 15:40:38.194948 1450159 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 15:40:38.204378 1450159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 15:40:38.212989 1450159 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 15:40:38.221973 1450159 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 15:40:38.230282 1450159 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 15:40:38.238243 1450159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 15:40:38.347689 1450159 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 15:40:38.513585 1450159 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 15:40:38.513672 1450159 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 15:40:38.517649 1450159 start.go:564] Will wait 60s for crictl version
	I1213 15:40:38.517716 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:40:38.521561 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 15:40:38.545077 1450159 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 15:40:38.545155 1450159 ssh_runner.go:195] Run: containerd --version
	I1213 15:40:38.572389 1450159 ssh_runner.go:195] Run: containerd --version
	I1213 15:40:38.595785 1450159 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 15:40:38.598739 1450159 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-098313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 15:40:38.616122 1450159 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 15:40:38.619906 1450159 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 15:40:38.629649 1450159 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-098313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-098313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 15:40:38.629767 1450159 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 15:40:38.629835 1450159 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 15:40:38.657776 1450159 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 15:40:38.657853 1450159 ssh_runner.go:195] Run: which lz4
	I1213 15:40:38.661439 1450159 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 15:40:38.664916 1450159 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 15:40:38.664951 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305624510 bytes)
	I1213 15:40:41.745703 1450159 containerd.go:563] duration metric: took 3.084307568s to copy over tarball
	I1213 15:40:41.745794 1450159 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 15:40:43.900841 1450159 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155018872s)
	I1213 15:40:43.900909 1450159 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	I1213 15:40:43.900989 1450159 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 15:40:43.931424 1450159 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 15:40:43.931451 1450159 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 15:40:43.931511 1450159 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 15:40:43.931733 1450159 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:40:43.931823 1450159 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:40:43.931931 1450159 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:40:43.932023 1450159 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:40:43.932132 1450159 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1213 15:40:43.932211 1450159 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1213 15:40:43.932332 1450159 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:40:43.935494 1450159 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:40:43.935895 1450159 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:40:43.936040 1450159 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:40:43.936173 1450159 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:40:43.936318 1450159 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:40:43.936483 1450159 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 15:40:43.936728 1450159 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1213 15:40:43.936998 1450159 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1213 15:40:44.270451 1450159 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1213 15:40:44.270556 1450159 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:40:44.283721 1450159 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1213 15:40:44.283819 1450159 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1213 15:40:44.299175 1450159 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1213 15:40:44.299256 1450159 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:40:44.340108 1450159 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1213 15:40:44.340153 1450159 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1213 15:40:44.340204 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:40:44.340258 1450159 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1213 15:40:44.340275 1450159 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:40:44.340326 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:40:44.340391 1450159 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1213 15:40:44.340411 1450159 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:40:44.340440 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:40:44.348379 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:40:44.348452 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:40:44.349271 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 15:40:44.357922 1450159 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1213 15:40:44.357994 1450159 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:40:44.374802 1450159 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1213 15:40:44.374878 1450159 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:40:44.400940 1450159 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1213 15:40:44.401018 1450159 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:40:44.404938 1450159 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1213 15:40:44.405077 1450159 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1213 15:40:44.430317 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:40:44.430399 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:40:44.430457 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 15:40:44.430518 1450159 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1213 15:40:44.430557 1450159 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:40:44.430585 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:40:44.431238 1450159 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1213 15:40:44.431269 1450159 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:40:44.431383 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:40:44.504931 1450159 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1213 15:40:44.505007 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:40:44.505016 1450159 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1213 15:40:44.505104 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:40:44.505109 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:40:44.505172 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 15:40:44.505212 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:40:44.505255 1450159 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1213 15:40:44.505279 1450159 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:40:44.505311 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:40:44.505360 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:40:44.582399 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:40:44.582504 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:40:44.582412 1450159 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1213 15:40:44.582609 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:40:44.582641 1450159 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1213 15:40:44.582690 1450159 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 15:40:44.582765 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 15:40:44.582802 1450159 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 15:40:44.640424 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 15:40:44.640510 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:40:44.640555 1450159 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1213 15:40:44.640596 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1213 15:40:44.640672 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:40:44.644389 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:40:44.711959 1450159 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1213 15:40:44.712027 1450159 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1213 15:40:44.716441 1450159 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 15:40:44.716531 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 15:40:44.716532 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:40:44.720509 1450159 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 15:40:44.870409 1450159 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 15:40:44.870410 1450159 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1213 15:40:44.870538 1450159 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1213 15:40:44.874328 1450159 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1213 15:40:44.874363 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1213 15:40:45.051282 1450159 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1213 15:40:45.051442 1450159 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	W1213 15:40:45.183358 1450159 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1213 15:40:45.183545 1450159 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1213 15:40:45.183632 1450159 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 15:40:45.828137 1450159 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1213 15:40:45.828188 1450159 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 15:40:45.828265 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:40:45.832066 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 15:40:45.966955 1450159 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 15:40:45.967077 1450159 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1213 15:40:45.971250 1450159 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1213 15:40:45.971296 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1213 15:40:46.083768 1450159 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 15:40:46.083890 1450159 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1213 15:40:46.479733 1450159 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 15:40:46.479793 1450159 cache_images.go:94] duration metric: took 2.548327541s to LoadCachedImages
	W1213 15:40:46.479861 1450159 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0: no such file or directory
	I1213 15:40:46.479875 1450159 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 15:40:46.479970 1450159 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-098313 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-098313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 15:40:46.480036 1450159 ssh_runner.go:195] Run: sudo crictl info
	I1213 15:40:46.508640 1450159 cni.go:84] Creating CNI manager for ""
	I1213 15:40:46.508663 1450159 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 15:40:46.508679 1450159 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 15:40:46.508725 1450159 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-098313 NodeName:kubernetes-upgrade-098313 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 15:40:46.508874 1450159 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-098313"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 15:40:46.508952 1450159 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 15:40:46.518119 1450159 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 15:40:46.518234 1450159 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 15:40:46.525890 1450159 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1213 15:40:46.539565 1450159 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 15:40:46.552476 1450159 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1213 15:40:46.566680 1450159 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 15:40:46.570462 1450159 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 15:40:46.581081 1450159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 15:40:46.708875 1450159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 15:40:46.727968 1450159 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313 for IP: 192.168.76.2
	I1213 15:40:46.727993 1450159 certs.go:195] generating shared ca certs ...
	I1213 15:40:46.728009 1450159 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:40:46.728218 1450159 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 15:40:46.728285 1450159 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 15:40:46.728308 1450159 certs.go:257] generating profile certs ...
	I1213 15:40:46.728431 1450159 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/client.key
	I1213 15:40:46.728524 1450159 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/apiserver.key.2db807c6
	I1213 15:40:46.728592 1450159 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/proxy-client.key
	I1213 15:40:46.728732 1450159 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 15:40:46.728787 1450159 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 15:40:46.728803 1450159 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 15:40:46.728845 1450159 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 15:40:46.728895 1450159 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 15:40:46.728928 1450159 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 15:40:46.728995 1450159 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 15:40:46.729598 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 15:40:46.750236 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 15:40:46.775058 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 15:40:46.799223 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 15:40:46.817875 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1213 15:40:46.836577 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 15:40:46.854141 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 15:40:46.873035 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 15:40:46.891461 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 15:40:46.909824 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 15:40:46.928322 1450159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 15:40:46.946318 1450159 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 15:40:46.959525 1450159 ssh_runner.go:195] Run: openssl version
	I1213 15:40:46.968997 1450159 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 15:40:46.977239 1450159 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 15:40:46.985513 1450159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 15:40:46.989965 1450159 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 15:40:46.990033 1450159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 15:40:47.032734 1450159 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 15:40:47.040793 1450159 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 15:40:47.049115 1450159 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 15:40:47.057948 1450159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 15:40:47.062273 1450159 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 15:40:47.062372 1450159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 15:40:47.103933 1450159 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 15:40:47.111455 1450159 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 15:40:47.118857 1450159 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 15:40:47.126686 1450159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 15:40:47.131152 1450159 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 15:40:47.131227 1450159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 15:40:47.172671 1450159 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 15:40:47.180328 1450159 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 15:40:47.184383 1450159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 15:40:47.226246 1450159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 15:40:47.268624 1450159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 15:40:47.310866 1450159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 15:40:47.353109 1450159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 15:40:47.395659 1450159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 15:40:47.437617 1450159 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-098313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-098313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 15:40:47.437703 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 15:40:47.437789 1450159 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 15:40:47.464476 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:40:47.464502 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:40:47.464507 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:40:47.464511 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:40:47.464515 1450159 cri.go:89] found id: ""
	I1213 15:40:47.464568 1450159 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1213 15:40:47.486056 1450159 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T15:40:47Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1213 15:40:47.486126 1450159 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 15:40:47.494174 1450159 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 15:40:47.494194 1450159 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 15:40:47.494276 1450159 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 15:40:47.502473 1450159 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 15:40:47.503043 1450159 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-098313" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 15:40:47.503287 1450159 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-098313" cluster setting kubeconfig missing "kubernetes-upgrade-098313" context setting]
	I1213 15:40:47.503755 1450159 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:40:47.504448 1450159 kapi.go:59] client config for kubernetes-upgrade-098313: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/client.key", CAFile:"/home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 15:40:47.504955 1450159 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 15:40:47.504975 1450159 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 15:40:47.504982 1450159 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 15:40:47.504987 1450159 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 15:40:47.504992 1450159 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 15:40:47.505261 1450159 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 15:40:47.515736 1450159 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 15:40:09.706306013 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 15:40:46.561437059 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-098313"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1213 15:40:47.515758 1450159 kubeadm.go:1161] stopping kube-system containers ...
	I1213 15:40:47.515770 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 15:40:47.515834 1450159 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 15:40:47.543040 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:40:47.543064 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:40:47.543069 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:40:47.543076 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:40:47.543080 1450159 cri.go:89] found id: ""
	I1213 15:40:47.543085 1450159 cri.go:252] Stopping containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:40:47.543154 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:40:47.547530 1450159 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6
	I1213 15:40:47.586679 1450159 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 15:40:47.611479 1450159 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:40:47.619895 1450159 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 13 15:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 13 15:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 13 15:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 13 15:40 /etc/kubernetes/scheduler.conf
	
	I1213 15:40:47.619987 1450159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 15:40:47.629338 1450159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 15:40:47.637574 1450159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 15:40:47.646507 1450159 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 15:40:47.646574 1450159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:40:47.655687 1450159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 15:40:47.664162 1450159 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 15:40:47.664236 1450159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 15:40:47.672148 1450159 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
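	Before re-running the kubeadm phases, the runner greps each existing kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the ones that do not reference it (here controller-manager.conf and scheduler.conf), then stages the freshly rendered kubeadm.yaml. A rough shell equivalent of that check, under the assumption of the same endpoint and file set (a sketch, not minikube's actual kubeadm.go logic):

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep files that already point at the expected endpoint, drop the rest
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml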
	I1213 15:40:47.680186 1450159 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 15:40:47.738584 1450159 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 15:40:49.164916 1450159 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.426296968s)
	I1213 15:40:49.164986 1450159 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 15:40:49.407972 1450159 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 15:40:49.475704 1450159 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 15:40:49.522981 1450159 api_server.go:52] waiting for apiserver process to appear ...
	I1213 15:40:49.523101 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:50.023964 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:50.523202 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:51.023349 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:51.524110 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:52.023769 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:52.523632 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:53.023258 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:53.523900 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:54.023896 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:54.523172 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:55.023270 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:55.523710 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:56.023605 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:56.523210 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:57.024161 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:57.523636 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:58.023282 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:58.524169 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:59.023256 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:40:59.523244 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:00.023726 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:00.523243 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:01.023265 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:01.524259 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:02.023254 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:02.523764 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:03.024178 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:03.524088 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:04.023919 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:04.524048 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:05.024009 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:05.523354 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:06.024205 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:06.523939 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:07.024038 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:07.523350 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:08.023601 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:08.523430 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:09.024171 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:09.523190 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:10.024133 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:10.523262 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:11.023494 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:11.523743 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:12.023810 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:12.523597 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:13.023239 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:13.523428 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:14.023466 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:14.523569 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:15.023699 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:15.523368 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:16.024045 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:16.523473 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:17.023443 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:17.523542 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:18.023344 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:18.523803 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:19.023686 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:19.523485 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:20.023779 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:20.524317 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:21.023443 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:21.523461 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:22.024070 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:22.523569 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:23.024179 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:23.523272 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:24.023278 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:24.524125 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:25.024008 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:25.523483 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:26.023852 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:26.524155 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:27.023238 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:27.523301 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:28.023289 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:28.523978 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:29.023938 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:29.523230 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:30.036541 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:30.524167 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:31.024214 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:31.523290 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:32.023190 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:32.523279 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:33.023277 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:33.523484 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:34.024212 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:34.524204 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:35.023983 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:35.523237 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:36.023380 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:36.523181 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:37.023360 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:37.523175 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:38.023226 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:38.523254 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:39.023270 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:39.524036 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:40.023717 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:40.523208 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:41.023833 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:41.523782 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:42.023832 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:42.523799 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:43.023292 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:43.523250 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:44.024168 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:44.523411 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:45.023247 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:45.523227 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:46.024071 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:46.523436 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:47.023914 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:47.523555 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:48.023429 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:48.523235 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:49.023539 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
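	Between 15:40:49 and 15:41:49 the runner polls for the kube-apiserver process roughly twice a second with the pgrep pattern shown above, then gives up on the process check and switches to collecting diagnostics. A minimal stand-alone version of that wait loop (an illustration only; the interval and timeout here are assumptions, and minikube's real implementation is the Go code behind api_server.go:52):

	    # poll for a kube-apiserver process belonging to this minikube profile,
	    # giving up after about 60 seconds
	    for _ in $(seq 120); do
	      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	      sleep 0.5
	    done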
	I1213 15:41:49.523774 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:41:49.523963 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:41:49.555560 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:41:49.555633 1450159 cri.go:89] found id: ""
	I1213 15:41:49.555655 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:41:49.555746 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:49.560670 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:41:49.560798 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:41:49.620368 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:41:49.620444 1450159 cri.go:89] found id: ""
	I1213 15:41:49.620464 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:41:49.620555 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:49.625221 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:41:49.625358 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:41:49.661837 1450159 cri.go:89] found id: ""
	I1213 15:41:49.661860 1450159 logs.go:282] 0 containers: []
	W1213 15:41:49.661868 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:41:49.661874 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:41:49.661933 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:41:49.694335 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:41:49.694355 1450159 cri.go:89] found id: ""
	I1213 15:41:49.694363 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:41:49.694421 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:49.701864 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:41:49.701936 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:41:49.743293 1450159 cri.go:89] found id: ""
	I1213 15:41:49.743392 1450159 logs.go:282] 0 containers: []
	W1213 15:41:49.743402 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:41:49.743410 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:41:49.743469 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:41:49.773939 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:41:49.773959 1450159 cri.go:89] found id: ""
	I1213 15:41:49.773969 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:41:49.774026 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:49.780135 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:41:49.780265 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:41:49.809283 1450159 cri.go:89] found id: ""
	I1213 15:41:49.809359 1450159 logs.go:282] 0 containers: []
	W1213 15:41:49.809382 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:41:49.809400 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:41:49.809491 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:41:49.846954 1450159 cri.go:89] found id: ""
	I1213 15:41:49.847031 1450159 logs.go:282] 0 containers: []
	W1213 15:41:49.847053 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:41:49.847078 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:41:49.847118 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:41:49.920607 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:41:49.920685 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:41:49.971166 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:41:49.971243 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:41:50.009009 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:41:50.009105 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:41:50.076370 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:41:50.076400 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:41:50.165942 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:41:50.166002 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:41:50.257515 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:41:50.257537 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:41:50.257550 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:41:50.306214 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:41:50.306292 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:41:50.330058 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:41:50.330173 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
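	Each diagnostics pass repeats the same failure from "describe nodes": localhost:8443 refuses connections, so the kube-apiserver container (bb377c8b...) exists in containerd but nothing is actually serving on the control-plane port. A quick manual probe that separates "listening but unhealthy" from "nothing bound to the port", assuming the default port 8443 seen in the errors (not part of the test tooling itself):

	    # connection refused => nothing listening; any HTTP status => the apiserver is at least up
	    curl -k https://localhost:8443/healthz
	    sudo crictl logs --tail 50 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6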
	I1213 15:41:52.960236 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:52.974781 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:41:52.974850 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:41:53.017868 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:41:53.017889 1450159 cri.go:89] found id: ""
	I1213 15:41:53.017898 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:41:53.017956 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:53.025892 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:41:53.025965 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:41:53.082163 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:41:53.082183 1450159 cri.go:89] found id: ""
	I1213 15:41:53.082191 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:41:53.082253 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:53.086407 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:41:53.086479 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:41:53.124398 1450159 cri.go:89] found id: ""
	I1213 15:41:53.124425 1450159 logs.go:282] 0 containers: []
	W1213 15:41:53.124433 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:41:53.124439 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:41:53.124499 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:41:53.167072 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:41:53.167096 1450159 cri.go:89] found id: ""
	I1213 15:41:53.167104 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:41:53.167174 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:53.171410 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:41:53.171484 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:41:53.213025 1450159 cri.go:89] found id: ""
	I1213 15:41:53.213051 1450159 logs.go:282] 0 containers: []
	W1213 15:41:53.213060 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:41:53.213067 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:41:53.213133 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:41:53.248881 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:41:53.248906 1450159 cri.go:89] found id: ""
	I1213 15:41:53.248914 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:41:53.248982 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:53.253324 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:41:53.253399 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:41:53.286069 1450159 cri.go:89] found id: ""
	I1213 15:41:53.286096 1450159 logs.go:282] 0 containers: []
	W1213 15:41:53.286105 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:41:53.286111 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:41:53.286177 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:41:53.316746 1450159 cri.go:89] found id: ""
	I1213 15:41:53.316773 1450159 logs.go:282] 0 containers: []
	W1213 15:41:53.316782 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:41:53.316796 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:41:53.316807 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:41:53.368711 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:41:53.368749 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:41:53.448650 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:41:53.448687 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:41:53.493692 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:41:53.493721 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:41:53.552096 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:41:53.552139 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:41:53.598704 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:41:53.598734 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:41:53.707296 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:41:53.707334 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:41:53.707348 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:41:53.746891 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:41:53.746926 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:41:53.789173 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:41:53.789210 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:41:56.341484 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:56.355996 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:41:56.356060 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:41:56.384734 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:41:56.384755 1450159 cri.go:89] found id: ""
	I1213 15:41:56.384764 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:41:56.384824 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:56.388525 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:41:56.388593 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:41:56.432982 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:41:56.433001 1450159 cri.go:89] found id: ""
	I1213 15:41:56.433009 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:41:56.433061 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:56.437543 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:41:56.437628 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:41:56.469448 1450159 cri.go:89] found id: ""
	I1213 15:41:56.469471 1450159 logs.go:282] 0 containers: []
	W1213 15:41:56.469479 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:41:56.469486 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:41:56.469539 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:41:56.505735 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:41:56.505754 1450159 cri.go:89] found id: ""
	I1213 15:41:56.505762 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:41:56.505819 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:56.510179 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:41:56.510252 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:41:56.560520 1450159 cri.go:89] found id: ""
	I1213 15:41:56.560545 1450159 logs.go:282] 0 containers: []
	W1213 15:41:56.560553 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:41:56.560560 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:41:56.560622 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:41:56.638944 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:41:56.638963 1450159 cri.go:89] found id: ""
	I1213 15:41:56.638971 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:41:56.639042 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:56.645115 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:41:56.645184 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:41:56.706182 1450159 cri.go:89] found id: ""
	I1213 15:41:56.706203 1450159 logs.go:282] 0 containers: []
	W1213 15:41:56.706213 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:41:56.706219 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:41:56.706275 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:41:56.740127 1450159 cri.go:89] found id: ""
	I1213 15:41:56.740155 1450159 logs.go:282] 0 containers: []
	W1213 15:41:56.740163 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:41:56.740177 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:41:56.740188 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:41:56.762297 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:41:56.762369 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:41:56.832155 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:41:56.832234 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:41:56.881552 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:41:56.881635 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:41:56.929110 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:41:56.929139 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:41:57.002118 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:41:57.002167 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:41:57.101604 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:41:57.101627 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:41:57.101642 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:41:57.150168 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:41:57.150201 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:41:57.207083 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:41:57.207114 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:41:59.751486 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:41:59.764161 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:41:59.764235 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:41:59.799723 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:41:59.799746 1450159 cri.go:89] found id: ""
	I1213 15:41:59.799755 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:41:59.799815 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:59.803834 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:41:59.803912 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:41:59.840905 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:41:59.840925 1450159 cri.go:89] found id: ""
	I1213 15:41:59.840933 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:41:59.840988 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:59.849981 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:41:59.850073 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:41:59.876103 1450159 cri.go:89] found id: ""
	I1213 15:41:59.876166 1450159 logs.go:282] 0 containers: []
	W1213 15:41:59.876182 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:41:59.876190 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:41:59.876253 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:41:59.910166 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:41:59.910192 1450159 cri.go:89] found id: ""
	I1213 15:41:59.910202 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:41:59.910260 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:59.914759 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:41:59.914821 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:41:59.943794 1450159 cri.go:89] found id: ""
	I1213 15:41:59.943814 1450159 logs.go:282] 0 containers: []
	W1213 15:41:59.943822 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:41:59.943829 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:41:59.943888 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:41:59.970292 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:41:59.970316 1450159 cri.go:89] found id: ""
	I1213 15:41:59.970326 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:41:59.970385 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:41:59.974282 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:41:59.974357 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:00.032876 1450159 cri.go:89] found id: ""
	I1213 15:42:00.032902 1450159 logs.go:282] 0 containers: []
	W1213 15:42:00.032911 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:00.032917 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:00.032992 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:00.180789 1450159 cri.go:89] found id: ""
	I1213 15:42:00.180816 1450159 logs.go:282] 0 containers: []
	W1213 15:42:00.180825 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:00.180841 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:00.180854 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:00.261848 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:00.261881 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:00.389870 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:00.389902 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:00.389919 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:00.464908 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:00.464949 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:00.530305 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:00.530380 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:00.591403 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:00.591478 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:00.627657 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:00.627738 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:00.663028 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:00.663101 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:00.699062 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:00.699095 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:03.236991 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:03.248585 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:03.248662 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:03.283541 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:03.283567 1450159 cri.go:89] found id: ""
	I1213 15:42:03.283576 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:03.283638 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:03.289665 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:03.289746 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:03.323951 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:03.323978 1450159 cri.go:89] found id: ""
	I1213 15:42:03.323987 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:03.324048 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:03.329008 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:03.329112 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:03.363261 1450159 cri.go:89] found id: ""
	I1213 15:42:03.363290 1450159 logs.go:282] 0 containers: []
	W1213 15:42:03.363300 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:03.363306 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:03.363381 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:03.409358 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:03.409385 1450159 cri.go:89] found id: ""
	I1213 15:42:03.409394 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:03.409455 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:03.414143 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:03.414222 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:03.452980 1450159 cri.go:89] found id: ""
	I1213 15:42:03.453007 1450159 logs.go:282] 0 containers: []
	W1213 15:42:03.453018 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:03.453027 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:03.453091 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:03.494987 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:03.495014 1450159 cri.go:89] found id: ""
	I1213 15:42:03.495023 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:03.495107 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:03.499496 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:03.499602 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:03.559190 1450159 cri.go:89] found id: ""
	I1213 15:42:03.559217 1450159 logs.go:282] 0 containers: []
	W1213 15:42:03.559226 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:03.559232 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:03.559290 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:03.612953 1450159 cri.go:89] found id: ""
	I1213 15:42:03.612982 1450159 logs.go:282] 0 containers: []
	W1213 15:42:03.612991 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:03.613006 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:03.613018 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:03.719593 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:03.719630 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:03.737768 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:03.737802 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:03.834801 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:03.834835 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:03.834848 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:03.892368 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:03.892405 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:03.955754 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:03.955789 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:04.014635 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:04.014712 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:04.063769 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:04.063805 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:04.129302 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:04.129381 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:06.674331 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:06.691088 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:06.691158 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:06.733714 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:06.733732 1450159 cri.go:89] found id: ""
	I1213 15:42:06.733747 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:06.733803 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:06.739893 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:06.739964 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:06.790726 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:06.790745 1450159 cri.go:89] found id: ""
	I1213 15:42:06.790754 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:06.790811 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:06.798968 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:06.799087 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:06.844857 1450159 cri.go:89] found id: ""
	I1213 15:42:06.844921 1450159 logs.go:282] 0 containers: []
	W1213 15:42:06.844944 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:06.844963 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:06.845049 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:06.904090 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:06.904114 1450159 cri.go:89] found id: ""
	I1213 15:42:06.904130 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:06.904194 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:06.908387 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:06.908472 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:06.948099 1450159 cri.go:89] found id: ""
	I1213 15:42:06.948127 1450159 logs.go:282] 0 containers: []
	W1213 15:42:06.948149 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:06.948159 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:06.948237 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:07.000501 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:07.000526 1450159 cri.go:89] found id: ""
	I1213 15:42:07.000551 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:07.000616 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:07.007662 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:07.007749 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:07.049857 1450159 cri.go:89] found id: ""
	I1213 15:42:07.049885 1450159 logs.go:282] 0 containers: []
	W1213 15:42:07.049901 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:07.049912 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:07.049982 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:07.094179 1450159 cri.go:89] found id: ""
	I1213 15:42:07.094217 1450159 logs.go:282] 0 containers: []
	W1213 15:42:07.094226 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:07.094241 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:07.094255 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:07.114662 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:07.114694 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:07.242819 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:07.242851 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:07.242864 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:07.315486 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:07.315527 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:07.402995 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:07.403083 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:07.478056 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:07.478088 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:07.549087 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:07.549173 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:07.614973 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:07.615004 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:07.657074 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:07.657115 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:10.236394 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:10.253777 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:10.253849 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:10.332193 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:10.332217 1450159 cri.go:89] found id: ""
	I1213 15:42:10.332226 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:10.332300 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:10.336774 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:10.336856 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:10.365991 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:10.366024 1450159 cri.go:89] found id: ""
	I1213 15:42:10.366032 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:10.366095 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:10.370418 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:10.370499 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:10.398652 1450159 cri.go:89] found id: ""
	I1213 15:42:10.398680 1450159 logs.go:282] 0 containers: []
	W1213 15:42:10.398689 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:10.398701 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:10.398785 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:10.455336 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:10.455361 1450159 cri.go:89] found id: ""
	I1213 15:42:10.455370 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:10.455437 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:10.459943 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:10.460033 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:10.504458 1450159 cri.go:89] found id: ""
	I1213 15:42:10.504484 1450159 logs.go:282] 0 containers: []
	W1213 15:42:10.504493 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:10.504499 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:10.504558 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:10.553927 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:10.553971 1450159 cri.go:89] found id: ""
	I1213 15:42:10.553980 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:10.554055 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:10.564041 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:10.564142 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:10.606322 1450159 cri.go:89] found id: ""
	I1213 15:42:10.606349 1450159 logs.go:282] 0 containers: []
	W1213 15:42:10.606375 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:10.606382 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:10.606452 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:10.665537 1450159 cri.go:89] found id: ""
	I1213 15:42:10.665574 1450159 logs.go:282] 0 containers: []
	W1213 15:42:10.665584 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:10.665598 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:10.665612 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:10.737966 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:10.738003 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:10.784718 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:10.784758 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:10.804611 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:10.804643 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:10.870504 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:10.870547 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:10.960103 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:10.960169 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:11.050498 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:11.050581 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:11.203349 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:11.203371 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:11.203385 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:11.264359 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:11.264394 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:13.823426 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:13.834994 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:13.835074 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:13.877310 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:13.877336 1450159 cri.go:89] found id: ""
	I1213 15:42:13.877346 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:13.877410 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:13.882807 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:13.882885 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:13.938720 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:13.938751 1450159 cri.go:89] found id: ""
	I1213 15:42:13.938760 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:13.938818 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:13.948332 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:13.948411 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:13.977978 1450159 cri.go:89] found id: ""
	I1213 15:42:13.978001 1450159 logs.go:282] 0 containers: []
	W1213 15:42:13.978009 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:13.978016 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:13.978075 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:14.021836 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:14.021859 1450159 cri.go:89] found id: ""
	I1213 15:42:14.021868 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:14.021930 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:14.028734 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:14.028806 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:14.065824 1450159 cri.go:89] found id: ""
	I1213 15:42:14.065847 1450159 logs.go:282] 0 containers: []
	W1213 15:42:14.065856 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:14.065862 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:14.065927 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:14.097321 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:14.097395 1450159 cri.go:89] found id: ""
	I1213 15:42:14.097418 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:14.097504 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:14.103536 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:14.103655 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:14.134969 1450159 cri.go:89] found id: ""
	I1213 15:42:14.135044 1450159 logs.go:282] 0 containers: []
	W1213 15:42:14.135068 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:14.135094 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:14.135171 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:14.171725 1450159 cri.go:89] found id: ""
	I1213 15:42:14.171801 1450159 logs.go:282] 0 containers: []
	W1213 15:42:14.171824 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:14.171860 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:14.171889 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:14.246046 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:14.246137 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:14.265758 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:14.265846 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:14.381035 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:14.381104 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:14.381138 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:14.472794 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:14.472867 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:14.533684 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:14.533786 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:14.578351 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:14.579103 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:14.626190 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:14.626260 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:14.666605 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:14.666683 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:17.215626 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:17.229344 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:17.229438 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:17.266373 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:17.266401 1450159 cri.go:89] found id: ""
	I1213 15:42:17.266410 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:17.266485 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:17.270743 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:17.270834 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:17.312920 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:17.312940 1450159 cri.go:89] found id: ""
	I1213 15:42:17.312948 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:17.313002 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:17.316981 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:17.317049 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:17.343596 1450159 cri.go:89] found id: ""
	I1213 15:42:17.343620 1450159 logs.go:282] 0 containers: []
	W1213 15:42:17.343628 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:17.343635 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:17.343700 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:17.377580 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:17.377604 1450159 cri.go:89] found id: ""
	I1213 15:42:17.377612 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:17.377668 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:17.381702 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:17.381773 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:17.409507 1450159 cri.go:89] found id: ""
	I1213 15:42:17.409530 1450159 logs.go:282] 0 containers: []
	W1213 15:42:17.409538 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:17.409544 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:17.409605 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:17.443611 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:17.443635 1450159 cri.go:89] found id: ""
	I1213 15:42:17.443642 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:17.443697 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:17.447765 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:17.447839 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:17.481587 1450159 cri.go:89] found id: ""
	I1213 15:42:17.481616 1450159 logs.go:282] 0 containers: []
	W1213 15:42:17.481625 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:17.481631 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:17.481705 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:17.514064 1450159 cri.go:89] found id: ""
	I1213 15:42:17.514085 1450159 logs.go:282] 0 containers: []
	W1213 15:42:17.514094 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:17.514109 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:17.514121 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:17.556265 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:17.556305 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:17.615023 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:17.615062 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:17.690081 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:17.690111 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:17.761792 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:17.761827 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:17.807138 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:17.807190 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:17.843234 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:17.843472 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:17.873794 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:17.873831 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:17.891583 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:17.891616 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:17.964610 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:20.466269 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:20.478092 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:20.478166 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:20.523894 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:20.523912 1450159 cri.go:89] found id: ""
	I1213 15:42:20.523920 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:20.523975 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:20.528296 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:20.528371 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:20.569554 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:20.569573 1450159 cri.go:89] found id: ""
	I1213 15:42:20.569581 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:20.569638 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:20.574112 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:20.574234 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:20.614730 1450159 cri.go:89] found id: ""
	I1213 15:42:20.614752 1450159 logs.go:282] 0 containers: []
	W1213 15:42:20.614760 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:20.614766 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:20.614834 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:20.690729 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:20.690750 1450159 cri.go:89] found id: ""
	I1213 15:42:20.690758 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:20.690818 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:20.694954 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:20.695026 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:20.729781 1450159 cri.go:89] found id: ""
	I1213 15:42:20.729803 1450159 logs.go:282] 0 containers: []
	W1213 15:42:20.729811 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:20.729817 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:20.729884 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:20.758633 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:20.758702 1450159 cri.go:89] found id: ""
	I1213 15:42:20.758726 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:20.758817 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:20.762986 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:20.763117 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:20.801773 1450159 cri.go:89] found id: ""
	I1213 15:42:20.801848 1450159 logs.go:282] 0 containers: []
	W1213 15:42:20.801872 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:20.801891 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:20.801977 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:20.837741 1450159 cri.go:89] found id: ""
	I1213 15:42:20.837827 1450159 logs.go:282] 0 containers: []
	W1213 15:42:20.837850 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:20.837875 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:20.837913 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:20.898794 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:20.898870 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:20.916525 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:20.916555 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:20.970303 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:20.970382 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:21.008529 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:21.008609 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:21.044100 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:21.044179 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:21.123503 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:21.123573 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:21.123613 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:21.173993 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:21.174067 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:21.221734 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:21.221810 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:23.778731 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:23.789302 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:23.789397 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:23.816550 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:23.816615 1450159 cri.go:89] found id: ""
	I1213 15:42:23.816637 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:23.816719 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:23.821375 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:23.821448 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:23.848570 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:23.848604 1450159 cri.go:89] found id: ""
	I1213 15:42:23.848614 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:23.848692 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:23.852748 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:23.852851 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:23.878971 1450159 cri.go:89] found id: ""
	I1213 15:42:23.878998 1450159 logs.go:282] 0 containers: []
	W1213 15:42:23.879007 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:23.879013 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:23.879099 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:23.905705 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:23.905730 1450159 cri.go:89] found id: ""
	I1213 15:42:23.905739 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:23.905815 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:23.909683 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:23.909780 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:23.934987 1450159 cri.go:89] found id: ""
	I1213 15:42:23.935058 1450159 logs.go:282] 0 containers: []
	W1213 15:42:23.935083 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:23.935102 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:23.935191 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:23.964901 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:23.964971 1450159 cri.go:89] found id: ""
	I1213 15:42:23.964988 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:23.965049 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:23.968911 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:23.969062 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:23.995347 1450159 cri.go:89] found id: ""
	I1213 15:42:23.995411 1450159 logs.go:282] 0 containers: []
	W1213 15:42:23.995436 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:23.995454 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:23.995537 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:24.025920 1450159 cri.go:89] found id: ""
	I1213 15:42:24.026002 1450159 logs.go:282] 0 containers: []
	W1213 15:42:24.026037 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:24.026065 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:24.026100 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:24.043888 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:24.043921 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:24.115442 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:24.115507 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:24.115534 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:24.149673 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:24.149708 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:24.186444 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:24.186493 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:24.225484 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:24.225521 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:24.287220 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:24.287256 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:24.325736 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:24.325776 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:24.358981 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:24.359086 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:26.892526 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:26.906352 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:26.906444 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:26.949617 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:26.949644 1450159 cri.go:89] found id: ""
	I1213 15:42:26.949652 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:26.949736 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:26.955483 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:26.955606 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:26.987195 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:26.987215 1450159 cri.go:89] found id: ""
	I1213 15:42:26.987224 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:26.987281 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:26.995508 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:26.995606 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:27.032974 1450159 cri.go:89] found id: ""
	I1213 15:42:27.033001 1450159 logs.go:282] 0 containers: []
	W1213 15:42:27.033010 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:27.033016 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:27.033080 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:27.063458 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:27.063480 1450159 cri.go:89] found id: ""
	I1213 15:42:27.063488 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:27.063548 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:27.067347 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:27.067424 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:27.110086 1450159 cri.go:89] found id: ""
	I1213 15:42:27.110111 1450159 logs.go:282] 0 containers: []
	W1213 15:42:27.110120 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:27.110125 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:27.110188 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:27.147750 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:27.147771 1450159 cri.go:89] found id: ""
	I1213 15:42:27.147779 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:27.147841 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:27.152331 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:27.152401 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:27.199054 1450159 cri.go:89] found id: ""
	I1213 15:42:27.199077 1450159 logs.go:282] 0 containers: []
	W1213 15:42:27.199085 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:27.199091 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:27.199153 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:27.228360 1450159 cri.go:89] found id: ""
	I1213 15:42:27.228385 1450159 logs.go:282] 0 containers: []
	W1213 15:42:27.228393 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:27.228414 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:27.228426 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:27.277598 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:27.277630 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:27.350808 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:27.350891 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:27.390161 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:27.390201 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:27.438641 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:27.438687 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:27.474436 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:27.474523 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:27.510454 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:27.510486 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:27.528671 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:27.528743 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:27.618838 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:27.618870 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:27.618898 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:30.167581 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:30.180670 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:30.180743 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:30.215019 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:30.215040 1450159 cri.go:89] found id: ""
	I1213 15:42:30.215049 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:30.215107 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:30.219653 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:30.219728 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:30.258398 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:30.258476 1450159 cri.go:89] found id: ""
	I1213 15:42:30.258500 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:30.258589 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:30.264322 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:30.264399 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:30.312382 1450159 cri.go:89] found id: ""
	I1213 15:42:30.312404 1450159 logs.go:282] 0 containers: []
	W1213 15:42:30.312413 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:30.312420 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:30.312485 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:30.372240 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:30.372260 1450159 cri.go:89] found id: ""
	I1213 15:42:30.372268 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:30.372335 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:30.386422 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:30.386501 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:30.430676 1450159 cri.go:89] found id: ""
	I1213 15:42:30.430701 1450159 logs.go:282] 0 containers: []
	W1213 15:42:30.430709 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:30.430715 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:30.430775 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:30.478511 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:30.478531 1450159 cri.go:89] found id: ""
	I1213 15:42:30.478539 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:30.478602 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:30.484534 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:30.484660 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:30.515832 1450159 cri.go:89] found id: ""
	I1213 15:42:30.515856 1450159 logs.go:282] 0 containers: []
	W1213 15:42:30.515864 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:30.515870 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:30.515931 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:30.546401 1450159 cri.go:89] found id: ""
	I1213 15:42:30.546681 1450159 logs.go:282] 0 containers: []
	W1213 15:42:30.546718 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:30.546750 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:30.546775 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:30.590972 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:30.591045 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:30.643259 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:30.643350 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:30.688736 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:30.688810 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:30.724865 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:30.724942 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:30.790581 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:30.790664 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:30.809604 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:30.809636 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:30.856518 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:30.856550 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:30.920589 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:30.920626 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:31.020355 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:33.520671 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:33.530907 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:33.530989 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:33.561709 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:33.561734 1450159 cri.go:89] found id: ""
	I1213 15:42:33.561743 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:33.561809 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:33.566252 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:33.566340 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:33.597753 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:33.597787 1450159 cri.go:89] found id: ""
	I1213 15:42:33.597796 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:33.597865 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:33.602320 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:33.602400 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:33.632670 1450159 cri.go:89] found id: ""
	I1213 15:42:33.632697 1450159 logs.go:282] 0 containers: []
	W1213 15:42:33.632705 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:33.632711 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:33.632777 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:33.658986 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:33.659010 1450159 cri.go:89] found id: ""
	I1213 15:42:33.659018 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:33.659084 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:33.663355 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:33.663437 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:33.690123 1450159 cri.go:89] found id: ""
	I1213 15:42:33.690161 1450159 logs.go:282] 0 containers: []
	W1213 15:42:33.690170 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:33.690176 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:33.690246 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:33.720658 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:33.720683 1450159 cri.go:89] found id: ""
	I1213 15:42:33.720692 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:33.720766 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:33.724803 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:33.724901 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:33.757882 1450159 cri.go:89] found id: ""
	I1213 15:42:33.757916 1450159 logs.go:282] 0 containers: []
	W1213 15:42:33.757929 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:33.757935 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:33.758012 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:33.794032 1450159 cri.go:89] found id: ""
	I1213 15:42:33.794060 1450159 logs.go:282] 0 containers: []
	W1213 15:42:33.794071 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:33.794086 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:33.794106 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:33.824072 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:33.824103 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:33.891891 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:33.891932 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:33.912008 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:33.912039 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:33.951770 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:33.951801 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:34.039457 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:34.039482 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:34.039499 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:34.086448 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:34.086482 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:34.166293 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:34.166328 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:34.225482 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:34.225521 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
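	The cycle above repeats roughly every three seconds while minikube waits for the apiserver to become reachable: it enumerates CRI containers for each expected control-plane component, then gathers kubelet, dmesg, containerd, per-container, and describe-nodes output. A minimal sketch of the enumeration step, assuming crictl is on the node's PATH and the commands are run over the same sudo SSH session (component names taken from the log above, not an exact reproduction of minikube's code):

	    # sketch: list container IDs for each control-plane component, as the loop in this log does
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet storage-provisioner; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done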
	I1213 15:42:36.767443 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:36.778383 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:36.778455 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:36.813850 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:36.813870 1450159 cri.go:89] found id: ""
	I1213 15:42:36.813878 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:36.813935 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:36.818138 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:36.818213 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:36.853133 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:36.853153 1450159 cri.go:89] found id: ""
	I1213 15:42:36.853162 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:36.853219 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:36.857474 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:36.857594 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:36.890429 1450159 cri.go:89] found id: ""
	I1213 15:42:36.890501 1450159 logs.go:282] 0 containers: []
	W1213 15:42:36.890525 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:36.890542 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:36.890630 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:36.920487 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:36.920558 1450159 cri.go:89] found id: ""
	I1213 15:42:36.920593 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:36.920684 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:36.925190 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:36.925312 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:36.966002 1450159 cri.go:89] found id: ""
	I1213 15:42:36.966077 1450159 logs.go:282] 0 containers: []
	W1213 15:42:36.966100 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:36.966118 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:36.966205 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:37.014766 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:37.014842 1450159 cri.go:89] found id: ""
	I1213 15:42:37.014868 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:37.014966 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:37.020237 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:37.020392 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:37.053044 1450159 cri.go:89] found id: ""
	I1213 15:42:37.053124 1450159 logs.go:282] 0 containers: []
	W1213 15:42:37.053162 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:37.053199 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:37.053301 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:37.101585 1450159 cri.go:89] found id: ""
	I1213 15:42:37.101657 1450159 logs.go:282] 0 containers: []
	W1213 15:42:37.101679 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:37.101706 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:37.101748 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:37.225926 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:37.226011 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:37.323459 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:37.323534 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:37.323564 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:37.372491 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:37.372570 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:37.403550 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:37.403585 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:37.419855 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:37.419925 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:37.453981 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:37.454057 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:37.489619 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:37.489695 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:37.539239 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:37.539428 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:40.075912 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:40.089831 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:40.089901 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:40.125048 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:40.125069 1450159 cri.go:89] found id: ""
	I1213 15:42:40.125077 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:40.125136 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:40.129728 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:40.129822 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:40.158017 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:40.158040 1450159 cri.go:89] found id: ""
	I1213 15:42:40.158049 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:40.158109 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:40.162694 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:40.162827 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:40.192976 1450159 cri.go:89] found id: ""
	I1213 15:42:40.192999 1450159 logs.go:282] 0 containers: []
	W1213 15:42:40.193008 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:40.193015 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:40.193079 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:40.221632 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:40.221652 1450159 cri.go:89] found id: ""
	I1213 15:42:40.221660 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:40.221716 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:40.225927 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:40.226048 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:40.253480 1450159 cri.go:89] found id: ""
	I1213 15:42:40.253559 1450159 logs.go:282] 0 containers: []
	W1213 15:42:40.253583 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:40.253602 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:40.253695 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:40.281031 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:40.281106 1450159 cri.go:89] found id: ""
	I1213 15:42:40.281129 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:40.281212 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:40.285392 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:40.285512 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:40.320744 1450159 cri.go:89] found id: ""
	I1213 15:42:40.320828 1450159 logs.go:282] 0 containers: []
	W1213 15:42:40.320851 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:40.320886 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:40.320970 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:40.369136 1450159 cri.go:89] found id: ""
	I1213 15:42:40.369218 1450159 logs.go:282] 0 containers: []
	W1213 15:42:40.369240 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:40.369270 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:40.369310 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:40.459244 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:40.459372 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:40.507969 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:40.508049 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:40.555252 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:40.555346 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:40.619035 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:40.619131 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:40.636082 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:40.636164 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:40.716088 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:40.716147 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:40.716184 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:40.766525 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:40.766599 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:40.808834 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:40.808913 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:43.340414 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:43.355062 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:43.355169 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:43.395034 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:43.395053 1450159 cri.go:89] found id: ""
	I1213 15:42:43.395061 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:43.395119 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:43.404504 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:43.404579 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:43.461657 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:43.461676 1450159 cri.go:89] found id: ""
	I1213 15:42:43.461684 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:43.461740 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:43.466427 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:43.466549 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:43.509809 1450159 cri.go:89] found id: ""
	I1213 15:42:43.509882 1450159 logs.go:282] 0 containers: []
	W1213 15:42:43.509905 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:43.509925 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:43.510029 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:43.568811 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:43.568873 1450159 cri.go:89] found id: ""
	I1213 15:42:43.568905 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:43.568997 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:43.577172 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:43.577303 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:43.623279 1450159 cri.go:89] found id: ""
	I1213 15:42:43.623360 1450159 logs.go:282] 0 containers: []
	W1213 15:42:43.623383 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:43.623402 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:43.623487 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:43.666490 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:43.666558 1450159 cri.go:89] found id: ""
	I1213 15:42:43.666579 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:43.666667 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:43.670640 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:43.670766 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:43.735772 1450159 cri.go:89] found id: ""
	I1213 15:42:43.735845 1450159 logs.go:282] 0 containers: []
	W1213 15:42:43.735880 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:43.735903 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:43.735999 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:43.778292 1450159 cri.go:89] found id: ""
	I1213 15:42:43.778367 1450159 logs.go:282] 0 containers: []
	W1213 15:42:43.778402 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:43.778433 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:43.778459 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:43.815065 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:43.823409 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:43.884110 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:43.884137 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:43.973081 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:43.973163 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:43.997220 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:43.997248 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:44.103785 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:44.103852 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:44.103880 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:44.157490 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:44.157568 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:44.248567 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:44.248645 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:44.322061 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:44.322152 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:46.884873 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:46.895563 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:46.895635 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:46.949528 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:46.949548 1450159 cri.go:89] found id: ""
	I1213 15:42:46.949556 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:46.949615 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:46.954027 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:46.954104 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:46.996402 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:46.996469 1450159 cri.go:89] found id: ""
	I1213 15:42:46.996492 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:46.996586 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:47.001026 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:47.001154 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:47.031366 1450159 cri.go:89] found id: ""
	I1213 15:42:47.031441 1450159 logs.go:282] 0 containers: []
	W1213 15:42:47.031464 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:47.031484 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:47.031617 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:47.064964 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:47.065037 1450159 cri.go:89] found id: ""
	I1213 15:42:47.065060 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:47.065154 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:47.069790 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:47.069919 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:47.101162 1450159 cri.go:89] found id: ""
	I1213 15:42:47.101234 1450159 logs.go:282] 0 containers: []
	W1213 15:42:47.101256 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:47.101274 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:47.101370 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:47.130359 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:47.130431 1450159 cri.go:89] found id: ""
	I1213 15:42:47.130454 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:47.130549 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:47.136422 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:47.136552 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:47.165889 1450159 cri.go:89] found id: ""
	I1213 15:42:47.165968 1450159 logs.go:282] 0 containers: []
	W1213 15:42:47.165990 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:47.166007 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:47.166093 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:47.197062 1450159 cri.go:89] found id: ""
	I1213 15:42:47.197141 1450159 logs.go:282] 0 containers: []
	W1213 15:42:47.197174 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:47.197215 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:47.197243 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:47.262606 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:47.262730 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:47.293145 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:47.293215 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:47.375191 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:47.375231 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:47.434402 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:47.434439 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:47.510629 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:47.510789 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:47.563345 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:47.563428 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:47.602448 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:47.602477 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:47.699021 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:47.699039 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:47.699052 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
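	Each "describe nodes" attempt in this window fails the same way because kubectl on the node targets localhost:8443, the apiserver port recorded in /var/lib/minikube/kubeconfig, and nothing is answering there yet. A hedged manual check from inside the node, assuming ss and curl are present in the minikube image (they are not shown in this log):

	    # sketch: confirm whether anything listens on the apiserver port and whether /readyz answers
	    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	    curl -ksS https://localhost:8443/readyz || echo "apiserver not reachable/ready"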
	I1213 15:42:50.263440 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:50.273703 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:50.273793 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:50.311799 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:50.311821 1450159 cri.go:89] found id: ""
	I1213 15:42:50.311830 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:50.311885 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:50.318846 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:50.318921 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:50.400569 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:50.400588 1450159 cri.go:89] found id: ""
	I1213 15:42:50.400596 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:50.400688 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:50.406505 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:50.406597 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:50.460777 1450159 cri.go:89] found id: ""
	I1213 15:42:50.460803 1450159 logs.go:282] 0 containers: []
	W1213 15:42:50.460813 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:50.460819 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:50.460897 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:50.507683 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:50.507711 1450159 cri.go:89] found id: ""
	I1213 15:42:50.507720 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:50.507806 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:50.512595 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:50.512688 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:50.555661 1450159 cri.go:89] found id: ""
	I1213 15:42:50.555687 1450159 logs.go:282] 0 containers: []
	W1213 15:42:50.555698 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:50.555705 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:50.555848 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:50.598988 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:50.599021 1450159 cri.go:89] found id: ""
	I1213 15:42:50.599030 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:50.599164 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:50.603512 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:50.603593 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:50.649640 1450159 cri.go:89] found id: ""
	I1213 15:42:50.649668 1450159 logs.go:282] 0 containers: []
	W1213 15:42:50.649677 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:50.649683 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:50.649788 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:50.686520 1450159 cri.go:89] found id: ""
	I1213 15:42:50.686557 1450159 logs.go:282] 0 containers: []
	W1213 15:42:50.686566 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:50.686607 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:50.686626 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:50.743229 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:50.743259 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:50.821306 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:50.821340 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:50.864854 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:50.864895 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:50.911373 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:50.911404 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:50.943717 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:50.943753 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:51.001038 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:51.001125 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:51.024940 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:51.025024 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:51.126838 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:51.126916 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:51.126946 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:53.666587 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:53.684282 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:53.684365 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:53.714268 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:53.714291 1450159 cri.go:89] found id: ""
	I1213 15:42:53.714300 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:53.714358 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:53.718826 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:53.718895 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:53.747461 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:53.747486 1450159 cri.go:89] found id: ""
	I1213 15:42:53.747494 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:53.747557 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:53.751540 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:53.751617 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:53.784047 1450159 cri.go:89] found id: ""
	I1213 15:42:53.784083 1450159 logs.go:282] 0 containers: []
	W1213 15:42:53.784102 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:53.784109 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:53.784175 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:53.829118 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:53.829139 1450159 cri.go:89] found id: ""
	I1213 15:42:53.829153 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:53.829213 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:53.836361 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:53.836441 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:53.872516 1450159 cri.go:89] found id: ""
	I1213 15:42:53.872543 1450159 logs.go:282] 0 containers: []
	W1213 15:42:53.872552 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:53.872558 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:53.872620 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:53.905560 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:53.905583 1450159 cri.go:89] found id: ""
	I1213 15:42:53.905591 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:53.905647 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:53.910599 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:53.910675 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:53.947756 1450159 cri.go:89] found id: ""
	I1213 15:42:53.947778 1450159 logs.go:282] 0 containers: []
	W1213 15:42:53.947787 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:53.947793 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:53.947868 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:53.994482 1450159 cri.go:89] found id: ""
	I1213 15:42:53.994503 1450159 logs.go:282] 0 containers: []
	W1213 15:42:53.994511 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:53.994526 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:53.994537 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:54.064217 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:54.064253 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:54.083870 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:54.083976 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:54.125176 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:54.125254 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:54.221339 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:54.221357 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:54.221369 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:54.290091 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:54.290167 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:54.421824 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:54.421857 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:54.523607 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:54.523642 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:54.569652 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:54.569691 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:42:57.134547 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:42:57.145722 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:42:57.145803 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:42:57.176670 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:57.176690 1450159 cri.go:89] found id: ""
	I1213 15:42:57.176698 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:42:57.176755 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:57.181845 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:42:57.181925 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:42:57.218991 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:57.219011 1450159 cri.go:89] found id: ""
	I1213 15:42:57.219019 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:42:57.219073 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:57.222839 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:42:57.222906 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:42:57.250580 1450159 cri.go:89] found id: ""
	I1213 15:42:57.250602 1450159 logs.go:282] 0 containers: []
	W1213 15:42:57.250610 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:42:57.250616 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:42:57.250675 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:42:57.290342 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:57.290366 1450159 cri.go:89] found id: ""
	I1213 15:42:57.290374 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:42:57.290429 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:57.295571 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:42:57.295650 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:42:57.330138 1450159 cri.go:89] found id: ""
	I1213 15:42:57.330157 1450159 logs.go:282] 0 containers: []
	W1213 15:42:57.330165 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:42:57.330172 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:42:57.330240 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:42:57.370823 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:57.370843 1450159 cri.go:89] found id: ""
	I1213 15:42:57.370852 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:42:57.370908 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:42:57.375687 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:42:57.375822 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:42:57.413632 1450159 cri.go:89] found id: ""
	I1213 15:42:57.413655 1450159 logs.go:282] 0 containers: []
	W1213 15:42:57.413664 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:42:57.413671 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:42:57.413736 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:42:57.453941 1450159 cri.go:89] found id: ""
	I1213 15:42:57.453964 1450159 logs.go:282] 0 containers: []
	W1213 15:42:57.453972 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:42:57.453986 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:42:57.453997 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:42:57.471616 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:42:57.471702 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:42:57.559128 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:42:57.559197 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:42:57.559226 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:42:57.625027 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:42:57.625111 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:42:57.701545 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:42:57.701629 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:42:57.771626 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:42:57.771718 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:42:57.837600 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:42:57.837663 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:42:57.892724 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:42:57.892782 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:42:57.949895 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:42:57.949971 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
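	For reference, the "Gathering logs for ..." steps in each pass map onto a handful of node-side commands; a condensed sketch, with <container-id> standing in for one of the IDs found by the enumeration step (for example the kube-apiserver ID shown above), and with the dmesg flags simplified relative to the exact invocation in the log:

	    # sketch: the log sources minikube collects on each pass
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl logs --tail 400 <container-id>
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig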
	I1213 15:43:00.499969 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:00.511355 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:00.511444 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:00.544906 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:00.544926 1450159 cri.go:89] found id: ""
	I1213 15:43:00.544934 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:00.545000 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:00.548944 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:00.549020 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:00.574835 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:00.574856 1450159 cri.go:89] found id: ""
	I1213 15:43:00.574864 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:00.574922 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:00.578925 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:00.579006 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:00.605563 1450159 cri.go:89] found id: ""
	I1213 15:43:00.605588 1450159 logs.go:282] 0 containers: []
	W1213 15:43:00.605600 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:00.605606 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:00.605668 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:00.634310 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:00.634335 1450159 cri.go:89] found id: ""
	I1213 15:43:00.634344 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:00.634406 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:00.638601 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:00.638677 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:00.666141 1450159 cri.go:89] found id: ""
	I1213 15:43:00.666165 1450159 logs.go:282] 0 containers: []
	W1213 15:43:00.666173 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:00.666179 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:00.666253 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:00.692993 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:00.693016 1450159 cri.go:89] found id: ""
	I1213 15:43:00.693027 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:00.693087 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:00.697113 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:00.697187 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:00.737200 1450159 cri.go:89] found id: ""
	I1213 15:43:00.737276 1450159 logs.go:282] 0 containers: []
	W1213 15:43:00.737296 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:00.737313 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:00.737405 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:00.772670 1450159 cri.go:89] found id: ""
	I1213 15:43:00.772696 1450159 logs.go:282] 0 containers: []
	W1213 15:43:00.772705 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:00.772722 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:00.772734 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:00.835927 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:00.835966 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:00.870635 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:00.870669 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:00.906422 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:00.906456 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:00.938443 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:00.938477 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:00.956243 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:00.956279 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:01.024212 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:01.024238 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:01.024254 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:01.064562 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:01.064595 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:01.127147 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:01.127455 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:03.661774 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:03.672150 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:03.672220 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:03.699741 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:03.699765 1450159 cri.go:89] found id: ""
	I1213 15:43:03.699773 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:03.699832 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:03.704878 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:03.704953 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:03.731299 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:03.731353 1450159 cri.go:89] found id: ""
	I1213 15:43:03.731362 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:03.731419 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:03.735432 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:03.735510 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:03.762660 1450159 cri.go:89] found id: ""
	I1213 15:43:03.762688 1450159 logs.go:282] 0 containers: []
	W1213 15:43:03.762697 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:03.762703 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:03.762765 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:03.789939 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:03.789964 1450159 cri.go:89] found id: ""
	I1213 15:43:03.789979 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:03.790040 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:03.794071 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:03.794152 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:03.820070 1450159 cri.go:89] found id: ""
	I1213 15:43:03.820095 1450159 logs.go:282] 0 containers: []
	W1213 15:43:03.820104 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:03.820110 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:03.820171 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:03.846224 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:03.846248 1450159 cri.go:89] found id: ""
	I1213 15:43:03.846256 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:03.846312 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:03.850402 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:03.850476 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:03.875965 1450159 cri.go:89] found id: ""
	I1213 15:43:03.875994 1450159 logs.go:282] 0 containers: []
	W1213 15:43:03.876002 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:03.876008 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:03.876066 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:03.902030 1450159 cri.go:89] found id: ""
	I1213 15:43:03.902056 1450159 logs.go:282] 0 containers: []
	W1213 15:43:03.902065 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:03.902078 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:03.902091 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:03.918419 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:03.918450 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:03.949353 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:03.949380 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:04.006880 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:04.006917 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:04.079472 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:04.079507 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:04.079522 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:04.119728 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:04.119764 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:04.154673 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:04.154709 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:04.195831 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:04.195871 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:04.230570 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:04.230621 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:06.760823 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:06.772588 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:06.772678 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:06.802534 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:06.802557 1450159 cri.go:89] found id: ""
	I1213 15:43:06.802565 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:06.802621 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:06.806227 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:06.806295 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:06.832040 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:06.832064 1450159 cri.go:89] found id: ""
	I1213 15:43:06.832073 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:06.832134 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:06.835950 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:06.836022 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:06.862447 1450159 cri.go:89] found id: ""
	I1213 15:43:06.862470 1450159 logs.go:282] 0 containers: []
	W1213 15:43:06.862478 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:06.862486 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:06.862546 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:06.889291 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:06.889314 1450159 cri.go:89] found id: ""
	I1213 15:43:06.889322 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:06.889377 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:06.892945 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:06.893016 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:06.919471 1450159 cri.go:89] found id: ""
	I1213 15:43:06.919546 1450159 logs.go:282] 0 containers: []
	W1213 15:43:06.919571 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:06.919584 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:06.919657 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:06.944714 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:06.944736 1450159 cri.go:89] found id: ""
	I1213 15:43:06.944746 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:06.944806 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:06.948882 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:06.948969 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:06.973564 1450159 cri.go:89] found id: ""
	I1213 15:43:06.973588 1450159 logs.go:282] 0 containers: []
	W1213 15:43:06.973597 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:06.973603 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:06.973685 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:06.998757 1450159 cri.go:89] found id: ""
	I1213 15:43:06.998780 1450159 logs.go:282] 0 containers: []
	W1213 15:43:06.998788 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:06.998804 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:06.998817 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:07.016798 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:07.016827 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:07.083069 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:07.083092 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:07.083106 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:07.123572 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:07.123607 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:07.154783 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:07.154816 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:07.185348 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:07.185379 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:07.247166 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:07.247204 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:07.281184 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:07.281214 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:07.319678 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:07.319714 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:09.853016 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:09.864734 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:09.864820 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:09.915801 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:09.915820 1450159 cri.go:89] found id: ""
	I1213 15:43:09.915829 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:09.915888 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:09.919807 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:09.919884 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:09.959633 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:09.959653 1450159 cri.go:89] found id: ""
	I1213 15:43:09.959662 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:09.959719 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:09.963854 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:09.963928 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:10.013152 1450159 cri.go:89] found id: ""
	I1213 15:43:10.013177 1450159 logs.go:282] 0 containers: []
	W1213 15:43:10.013187 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:10.013193 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:10.013263 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:10.068709 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:10.068729 1450159 cri.go:89] found id: ""
	I1213 15:43:10.068738 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:10.068799 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:10.083577 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:10.083665 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:10.191753 1450159 cri.go:89] found id: ""
	I1213 15:43:10.191789 1450159 logs.go:282] 0 containers: []
	W1213 15:43:10.191799 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:10.191805 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:10.191875 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:10.271288 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:10.271359 1450159 cri.go:89] found id: ""
	I1213 15:43:10.271369 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:10.271439 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:10.275694 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:10.275790 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:10.333637 1450159 cri.go:89] found id: ""
	I1213 15:43:10.333673 1450159 logs.go:282] 0 containers: []
	W1213 15:43:10.333682 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:10.333689 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:10.333758 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:10.365916 1450159 cri.go:89] found id: ""
	I1213 15:43:10.365949 1450159 logs.go:282] 0 containers: []
	W1213 15:43:10.365958 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:10.365975 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:10.365990 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:10.434402 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:10.434435 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:10.478826 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:10.478866 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:10.507243 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:10.507272 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:10.579020 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:10.579058 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:10.626670 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:10.626703 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:10.688017 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:10.688056 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:10.762866 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:10.762896 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:10.855626 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:10.855667 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:10.965262 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:13.465604 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:13.482237 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:13.482311 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:13.531407 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:13.531428 1450159 cri.go:89] found id: ""
	I1213 15:43:13.531436 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:13.531537 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:13.536706 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:13.536785 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:13.576653 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:13.576673 1450159 cri.go:89] found id: ""
	I1213 15:43:13.576681 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:13.576735 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:13.580858 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:13.580982 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:13.615302 1450159 cri.go:89] found id: ""
	I1213 15:43:13.615399 1450159 logs.go:282] 0 containers: []
	W1213 15:43:13.615434 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:13.615458 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:13.615559 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:13.655429 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:13.655504 1450159 cri.go:89] found id: ""
	I1213 15:43:13.655526 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:13.655613 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:13.660134 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:13.660254 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:13.713852 1450159 cri.go:89] found id: ""
	I1213 15:43:13.713923 1450159 logs.go:282] 0 containers: []
	W1213 15:43:13.713944 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:13.713964 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:13.714069 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:13.750599 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:13.750670 1450159 cri.go:89] found id: ""
	I1213 15:43:13.750704 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:13.750801 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:13.755902 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:13.756038 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:13.785648 1450159 cri.go:89] found id: ""
	I1213 15:43:13.785670 1450159 logs.go:282] 0 containers: []
	W1213 15:43:13.785679 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:13.785685 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:13.785751 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:13.817658 1450159 cri.go:89] found id: ""
	I1213 15:43:13.817680 1450159 logs.go:282] 0 containers: []
	W1213 15:43:13.817688 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:13.817702 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:13.817715 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:13.865056 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:13.865132 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:13.901835 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:13.901871 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:13.934605 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:13.934638 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:13.965211 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:13.965242 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:14.029950 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:14.029984 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:14.047954 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:14.047987 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:14.135608 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:14.135627 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:14.135643 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:14.165460 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:14.165494 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:16.704200 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:16.714402 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:16.714473 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:16.738333 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:16.738356 1450159 cri.go:89] found id: ""
	I1213 15:43:16.738364 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:16.738419 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:16.742150 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:16.742231 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:16.767034 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:16.767054 1450159 cri.go:89] found id: ""
	I1213 15:43:16.767062 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:16.767125 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:16.770945 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:16.771017 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:16.796067 1450159 cri.go:89] found id: ""
	I1213 15:43:16.796091 1450159 logs.go:282] 0 containers: []
	W1213 15:43:16.796099 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:16.796106 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:16.796163 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:16.821688 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:16.821769 1450159 cri.go:89] found id: ""
	I1213 15:43:16.821793 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:16.821876 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:16.825967 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:16.826035 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:16.849562 1450159 cri.go:89] found id: ""
	I1213 15:43:16.849586 1450159 logs.go:282] 0 containers: []
	W1213 15:43:16.849594 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:16.849600 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:16.849656 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:16.874701 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:16.874722 1450159 cri.go:89] found id: ""
	I1213 15:43:16.874731 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:16.874795 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:16.878524 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:16.878595 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:16.906646 1450159 cri.go:89] found id: ""
	I1213 15:43:16.906711 1450159 logs.go:282] 0 containers: []
	W1213 15:43:16.906733 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:16.906751 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:16.906839 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:16.931306 1450159 cri.go:89] found id: ""
	I1213 15:43:16.931366 1450159 logs.go:282] 0 containers: []
	W1213 15:43:16.931375 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:16.931388 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:16.931400 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:16.995946 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:16.995966 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:16.995979 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:17.029325 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:17.029361 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:17.059537 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:17.059568 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:17.097707 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:17.097749 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:17.138632 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:17.138670 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:17.171039 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:17.171077 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:17.199953 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:17.199983 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:17.260380 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:17.260416 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:19.777736 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:19.788650 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:19.788723 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:19.814035 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:19.814059 1450159 cri.go:89] found id: ""
	I1213 15:43:19.814068 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:19.814133 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:19.818086 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:19.818159 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:19.844427 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:19.844452 1450159 cri.go:89] found id: ""
	I1213 15:43:19.844460 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:19.844521 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:19.848407 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:19.848500 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:19.872917 1450159 cri.go:89] found id: ""
	I1213 15:43:19.872943 1450159 logs.go:282] 0 containers: []
	W1213 15:43:19.872952 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:19.872958 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:19.873018 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:19.903699 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:19.903720 1450159 cri.go:89] found id: ""
	I1213 15:43:19.903729 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:19.903787 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:19.907679 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:19.907763 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:19.932746 1450159 cri.go:89] found id: ""
	I1213 15:43:19.932771 1450159 logs.go:282] 0 containers: []
	W1213 15:43:19.932780 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:19.932786 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:19.932898 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:19.961684 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:19.961709 1450159 cri.go:89] found id: ""
	I1213 15:43:19.961718 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:19.961796 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:19.965757 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:19.965848 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:19.991374 1450159 cri.go:89] found id: ""
	I1213 15:43:19.991397 1450159 logs.go:282] 0 containers: []
	W1213 15:43:19.991406 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:19.991412 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:19.991497 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:20.020815 1450159 cri.go:89] found id: ""
	I1213 15:43:20.020858 1450159 logs.go:282] 0 containers: []
	W1213 15:43:20.020867 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:20.020900 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:20.020938 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:20.095400 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:20.095429 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:20.095445 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:20.147213 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:20.147287 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:20.182634 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:20.182666 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:20.216297 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:20.216341 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:20.281493 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:20.281529 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:20.320530 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:20.320563 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:20.356845 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:20.356882 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:20.387117 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:20.387148 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:22.904272 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:22.914396 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:22.914473 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:22.940205 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:22.940227 1450159 cri.go:89] found id: ""
	I1213 15:43:22.940235 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:22.940301 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:22.944232 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:22.944338 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:22.969724 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:22.969799 1450159 cri.go:89] found id: ""
	I1213 15:43:22.969820 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:22.969909 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:22.973924 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:22.974007 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:22.999730 1450159 cri.go:89] found id: ""
	I1213 15:43:22.999761 1450159 logs.go:282] 0 containers: []
	W1213 15:43:22.999770 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:22.999776 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:22.999838 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:23.027774 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:23.027800 1450159 cri.go:89] found id: ""
	I1213 15:43:23.027809 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:23.027868 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:23.031866 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:23.031991 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:23.057578 1450159 cri.go:89] found id: ""
	I1213 15:43:23.057655 1450159 logs.go:282] 0 containers: []
	W1213 15:43:23.057672 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:23.057680 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:23.057747 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:23.091104 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:23.091131 1450159 cri.go:89] found id: ""
	I1213 15:43:23.091140 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:23.091211 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:23.096970 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:23.097064 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:23.126446 1450159 cri.go:89] found id: ""
	I1213 15:43:23.126491 1450159 logs.go:282] 0 containers: []
	W1213 15:43:23.126500 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:23.126507 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:23.126579 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:23.156645 1450159 cri.go:89] found id: ""
	I1213 15:43:23.156670 1450159 logs.go:282] 0 containers: []
	W1213 15:43:23.156678 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:23.156692 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:23.156705 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:23.185890 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:23.185920 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:23.202606 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:23.202638 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:23.237693 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:23.237731 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:23.269205 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:23.269237 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:23.327347 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:23.327383 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:23.396222 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:23.396243 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:23.396256 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:23.433733 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:23.433764 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:23.468411 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:23.468444 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:26.000016 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:26.012872 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:26.012951 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:26.041786 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:26.041810 1450159 cri.go:89] found id: ""
	I1213 15:43:26.041818 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:26.041880 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:26.045872 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:26.045968 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:26.075128 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:26.075154 1450159 cri.go:89] found id: ""
	I1213 15:43:26.075163 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:26.075224 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:26.083110 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:26.083290 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:26.113094 1450159 cri.go:89] found id: ""
	I1213 15:43:26.113118 1450159 logs.go:282] 0 containers: []
	W1213 15:43:26.113136 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:26.113143 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:26.113217 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:26.142735 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:26.142757 1450159 cri.go:89] found id: ""
	I1213 15:43:26.142774 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:26.142834 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:26.147046 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:26.147133 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:26.183037 1450159 cri.go:89] found id: ""
	I1213 15:43:26.183064 1450159 logs.go:282] 0 containers: []
	W1213 15:43:26.183083 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:26.183089 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:26.183160 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:26.216517 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:26.216580 1450159 cri.go:89] found id: ""
	I1213 15:43:26.216600 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:26.216659 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:26.220616 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:26.220708 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:26.251482 1450159 cri.go:89] found id: ""
	I1213 15:43:26.251563 1450159 logs.go:282] 0 containers: []
	W1213 15:43:26.251586 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:26.251600 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:26.251678 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:26.278225 1450159 cri.go:89] found id: ""
	I1213 15:43:26.278257 1450159 logs.go:282] 0 containers: []
	W1213 15:43:26.278266 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:26.278296 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:26.278354 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:26.294998 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:26.295033 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:26.361861 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:26.361881 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:26.361895 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:26.400523 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:26.400554 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:26.434857 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:26.434889 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:26.477424 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:26.477463 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:26.510513 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:26.510545 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:26.548307 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:26.548349 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:26.606950 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:26.606986 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:29.152470 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:29.163790 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:29.163865 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:29.191009 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:29.191033 1450159 cri.go:89] found id: ""
	I1213 15:43:29.191041 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:29.191100 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:29.195821 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:29.195898 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:29.221865 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:29.221900 1450159 cri.go:89] found id: ""
	I1213 15:43:29.221909 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:29.221979 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:29.226460 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:29.226554 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:29.252228 1450159 cri.go:89] found id: ""
	I1213 15:43:29.252278 1450159 logs.go:282] 0 containers: []
	W1213 15:43:29.252288 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:29.252295 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:29.252367 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:29.282618 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:29.282649 1450159 cri.go:89] found id: ""
	I1213 15:43:29.282657 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:29.282729 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:29.286739 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:29.286824 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:29.318341 1450159 cri.go:89] found id: ""
	I1213 15:43:29.318367 1450159 logs.go:282] 0 containers: []
	W1213 15:43:29.318376 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:29.318382 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:29.318451 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:29.343860 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:29.343881 1450159 cri.go:89] found id: ""
	I1213 15:43:29.343889 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:29.343949 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:29.348124 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:29.348205 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:29.374285 1450159 cri.go:89] found id: ""
	I1213 15:43:29.374310 1450159 logs.go:282] 0 containers: []
	W1213 15:43:29.374319 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:29.374326 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:29.374388 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:29.399759 1450159 cri.go:89] found id: ""
	I1213 15:43:29.399783 1450159 logs.go:282] 0 containers: []
	W1213 15:43:29.399792 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:29.399807 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:29.399819 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:29.462027 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:29.462063 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:29.479458 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:29.479489 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:29.513775 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:29.513811 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:29.555720 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:29.555754 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:29.598838 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:29.598866 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:29.631764 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:29.631794 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:29.695242 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:29.695270 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:29.695286 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:29.734064 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:29.734098 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:32.264215 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:32.275320 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:32.275397 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:32.302066 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:32.302139 1450159 cri.go:89] found id: ""
	I1213 15:43:32.302161 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:32.302249 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:32.306425 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:32.306501 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:32.333243 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:32.333267 1450159 cri.go:89] found id: ""
	I1213 15:43:32.333276 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:32.333342 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:32.337394 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:32.337478 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:32.369211 1450159 cri.go:89] found id: ""
	I1213 15:43:32.369233 1450159 logs.go:282] 0 containers: []
	W1213 15:43:32.369242 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:32.369249 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:32.369312 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:32.395692 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:32.395724 1450159 cri.go:89] found id: ""
	I1213 15:43:32.395734 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:32.395794 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:32.399784 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:32.399858 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:32.429280 1450159 cri.go:89] found id: ""
	I1213 15:43:32.429347 1450159 logs.go:282] 0 containers: []
	W1213 15:43:32.429363 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:32.429371 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:32.429435 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:32.455256 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:32.455278 1450159 cri.go:89] found id: ""
	I1213 15:43:32.455286 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:32.455369 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:32.460339 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:32.460449 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:32.487359 1450159 cri.go:89] found id: ""
	I1213 15:43:32.487383 1450159 logs.go:282] 0 containers: []
	W1213 15:43:32.487392 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:32.487398 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:32.487465 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:32.512189 1450159 cri.go:89] found id: ""
	I1213 15:43:32.512215 1450159 logs.go:282] 0 containers: []
	W1213 15:43:32.512224 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:32.512238 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:32.512250 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:32.570708 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:32.570749 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:32.587402 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:32.587431 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:32.622124 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:32.622156 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:32.657153 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:32.657257 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:32.709590 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:32.709668 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:32.803633 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:32.803652 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:32.803665 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:32.861678 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:32.861763 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:32.942110 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:32.942188 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:35.512107 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:35.523505 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:35.523569 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:35.553472 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:35.553542 1450159 cri.go:89] found id: ""
	I1213 15:43:35.553578 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:35.553676 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:35.558220 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:35.558285 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:35.594418 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:35.594486 1450159 cri.go:89] found id: ""
	I1213 15:43:35.594509 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:35.594602 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:35.599001 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:35.599130 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:35.628769 1450159 cri.go:89] found id: ""
	I1213 15:43:35.628843 1450159 logs.go:282] 0 containers: []
	W1213 15:43:35.628866 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:35.628884 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:35.628977 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:35.656876 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:35.656948 1450159 cri.go:89] found id: ""
	I1213 15:43:35.656971 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:35.657064 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:35.661247 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:35.661395 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:35.703164 1450159 cri.go:89] found id: ""
	I1213 15:43:35.703247 1450159 logs.go:282] 0 containers: []
	W1213 15:43:35.703282 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:35.703351 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:35.703464 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:35.745710 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:35.745788 1450159 cri.go:89] found id: ""
	I1213 15:43:35.745820 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:35.745924 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:35.750848 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:35.750990 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:35.781776 1450159 cri.go:89] found id: ""
	I1213 15:43:35.781854 1450159 logs.go:282] 0 containers: []
	W1213 15:43:35.781877 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:35.781895 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:35.781985 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:35.817555 1450159 cri.go:89] found id: ""
	I1213 15:43:35.817633 1450159 logs.go:282] 0 containers: []
	W1213 15:43:35.817655 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:35.817702 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:35.817739 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:35.890905 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:35.891034 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:35.908021 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:35.908117 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:36.014401 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:36.014478 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:36.014508 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:36.073185 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:36.073223 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:36.165390 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:36.165422 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:36.210444 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:36.210523 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:36.247931 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:36.248009 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:36.299059 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:36.299135 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:38.844355 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:38.854942 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:38.855010 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:38.881620 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:38.881643 1450159 cri.go:89] found id: ""
	I1213 15:43:38.881651 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:38.881710 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:38.885617 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:38.885700 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:38.914086 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:38.914109 1450159 cri.go:89] found id: ""
	I1213 15:43:38.914117 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:38.914175 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:38.918042 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:38.918114 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:38.941966 1450159 cri.go:89] found id: ""
	I1213 15:43:38.941993 1450159 logs.go:282] 0 containers: []
	W1213 15:43:38.942001 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:38.942007 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:38.942064 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:38.968725 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:38.968749 1450159 cri.go:89] found id: ""
	I1213 15:43:38.968758 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:38.968815 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:38.972600 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:38.972672 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:38.996870 1450159 cri.go:89] found id: ""
	I1213 15:43:38.996893 1450159 logs.go:282] 0 containers: []
	W1213 15:43:38.996903 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:38.996909 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:38.996968 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:39.028066 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:39.028090 1450159 cri.go:89] found id: ""
	I1213 15:43:39.028099 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:39.028163 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:39.031966 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:39.032044 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:39.057064 1450159 cri.go:89] found id: ""
	I1213 15:43:39.057091 1450159 logs.go:282] 0 containers: []
	W1213 15:43:39.057100 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:39.057107 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:39.057183 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:39.084524 1450159 cri.go:89] found id: ""
	I1213 15:43:39.084550 1450159 logs.go:282] 0 containers: []
	W1213 15:43:39.084559 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:39.084573 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:39.084586 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:39.136814 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:39.136851 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:39.166797 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:39.166827 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:39.224945 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:39.224980 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:39.292654 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:39.292680 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:39.292697 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:39.331332 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:39.331368 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:39.380006 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:39.380035 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:39.410874 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:39.410906 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:39.439886 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:39.439921 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:41.957125 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:41.967843 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:41.967920 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:41.993716 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:41.993740 1450159 cri.go:89] found id: ""
	I1213 15:43:41.993750 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:41.993807 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:41.997591 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:41.997667 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:42.035432 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:42.035457 1450159 cri.go:89] found id: ""
	I1213 15:43:42.035467 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:42.035527 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:42.039609 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:42.039692 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:42.070779 1450159 cri.go:89] found id: ""
	I1213 15:43:42.070811 1450159 logs.go:282] 0 containers: []
	W1213 15:43:42.070823 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:42.070830 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:42.070904 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:42.114435 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:42.114460 1450159 cri.go:89] found id: ""
	I1213 15:43:42.114470 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:42.114555 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:42.120214 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:42.120311 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:42.165772 1450159 cri.go:89] found id: ""
	I1213 15:43:42.166010 1450159 logs.go:282] 0 containers: []
	W1213 15:43:42.166024 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:42.166034 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:42.166112 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:42.214033 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:42.214078 1450159 cri.go:89] found id: ""
	I1213 15:43:42.214094 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:42.214190 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:42.219608 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:42.219782 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:42.267666 1450159 cri.go:89] found id: ""
	I1213 15:43:42.267766 1450159 logs.go:282] 0 containers: []
	W1213 15:43:42.267803 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:42.267854 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:42.267996 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:42.307698 1450159 cri.go:89] found id: ""
	I1213 15:43:42.307728 1450159 logs.go:282] 0 containers: []
	W1213 15:43:42.307737 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:42.307752 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:42.307768 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:42.325685 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:42.325782 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:42.377308 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:42.377346 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:42.411259 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:42.411291 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:42.441343 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:42.441376 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:42.501210 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:42.501248 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:42.571555 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:42.571576 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:42.571590 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:42.603159 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:42.603190 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:42.663432 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:42.663476 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:45.201426 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:45.216583 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:45.216676 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:45.264639 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:45.264666 1450159 cri.go:89] found id: ""
	I1213 15:43:45.264675 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:45.264748 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:45.273935 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:45.274045 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:45.310720 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:45.310744 1450159 cri.go:89] found id: ""
	I1213 15:43:45.310752 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:45.310812 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:45.315227 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:45.315336 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:45.347710 1450159 cri.go:89] found id: ""
	I1213 15:43:45.347734 1450159 logs.go:282] 0 containers: []
	W1213 15:43:45.347743 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:45.347749 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:45.347824 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:45.373585 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:45.373609 1450159 cri.go:89] found id: ""
	I1213 15:43:45.373617 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:45.373674 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:45.377856 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:45.377935 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:45.404976 1450159 cri.go:89] found id: ""
	I1213 15:43:45.405001 1450159 logs.go:282] 0 containers: []
	W1213 15:43:45.405010 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:45.405016 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:45.405080 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:45.430559 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:45.430582 1450159 cri.go:89] found id: ""
	I1213 15:43:45.430591 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:45.430649 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:45.434589 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:45.434668 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:45.461462 1450159 cri.go:89] found id: ""
	I1213 15:43:45.461491 1450159 logs.go:282] 0 containers: []
	W1213 15:43:45.461512 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:45.461519 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:45.461582 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:45.486687 1450159 cri.go:89] found id: ""
	I1213 15:43:45.486710 1450159 logs.go:282] 0 containers: []
	W1213 15:43:45.486718 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:45.486732 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:45.486744 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:45.550715 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:45.550737 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:45.550750 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:45.585056 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:45.585091 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:45.618544 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:45.618576 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:45.653864 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:45.653897 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:45.684109 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:45.684141 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:45.714426 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:45.714477 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:45.776556 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:45.776591 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:45.792926 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:45.792957 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:48.323760 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:48.333925 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:48.333993 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:48.369750 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:48.369773 1450159 cri.go:89] found id: ""
	I1213 15:43:48.369781 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:48.369839 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:48.373617 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:48.373699 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:48.399847 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:48.399871 1450159 cri.go:89] found id: ""
	I1213 15:43:48.399880 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:48.399939 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:48.403913 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:48.403996 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:48.429310 1450159 cri.go:89] found id: ""
	I1213 15:43:48.429333 1450159 logs.go:282] 0 containers: []
	W1213 15:43:48.429342 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:48.429349 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:48.429409 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:48.456670 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:48.456691 1450159 cri.go:89] found id: ""
	I1213 15:43:48.456706 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:48.456764 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:48.460846 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:48.460948 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:48.486654 1450159 cri.go:89] found id: ""
	I1213 15:43:48.486679 1450159 logs.go:282] 0 containers: []
	W1213 15:43:48.486693 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:48.486700 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:48.486808 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:48.512604 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:48.512626 1450159 cri.go:89] found id: ""
	I1213 15:43:48.512635 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:48.512690 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:48.516578 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:48.516651 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:48.545797 1450159 cri.go:89] found id: ""
	I1213 15:43:48.545821 1450159 logs.go:282] 0 containers: []
	W1213 15:43:48.545829 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:48.545835 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:48.545906 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:48.571032 1450159 cri.go:89] found id: ""
	I1213 15:43:48.571055 1450159 logs.go:282] 0 containers: []
	W1213 15:43:48.571064 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:48.571079 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:48.571091 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:48.600007 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:48.600039 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:48.616342 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:48.616371 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:48.683784 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:48.683806 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:48.683821 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:48.716133 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:48.716166 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:48.755974 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:48.756011 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:48.786831 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:48.786865 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:48.848678 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:48.848755 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:48.883832 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:48.883904 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:51.414640 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:51.424987 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:51.425065 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:51.450924 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:51.450948 1450159 cri.go:89] found id: ""
	I1213 15:43:51.450956 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:51.451018 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:51.454777 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:51.454852 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:51.480293 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:51.480315 1450159 cri.go:89] found id: ""
	I1213 15:43:51.480323 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:51.480380 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:51.484034 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:51.484108 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:51.509175 1450159 cri.go:89] found id: ""
	I1213 15:43:51.509200 1450159 logs.go:282] 0 containers: []
	W1213 15:43:51.509209 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:51.509215 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:51.509281 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:51.538115 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:51.538137 1450159 cri.go:89] found id: ""
	I1213 15:43:51.538145 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:51.538202 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:51.542234 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:51.542318 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:51.567389 1450159 cri.go:89] found id: ""
	I1213 15:43:51.567414 1450159 logs.go:282] 0 containers: []
	W1213 15:43:51.567423 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:51.567442 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:51.567517 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:51.592220 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:51.592298 1450159 cri.go:89] found id: ""
	I1213 15:43:51.592322 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:51.592402 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:51.596170 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:51.596241 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:51.620354 1450159 cri.go:89] found id: ""
	I1213 15:43:51.620381 1450159 logs.go:282] 0 containers: []
	W1213 15:43:51.620400 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:51.620406 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:51.620499 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:51.647964 1450159 cri.go:89] found id: ""
	I1213 15:43:51.647988 1450159 logs.go:282] 0 containers: []
	W1213 15:43:51.647996 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:51.648012 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:51.648025 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:51.683220 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:51.683255 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:51.715245 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:51.715275 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:51.757961 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:51.757989 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:51.823123 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:51.823147 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:51.823161 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:51.868699 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:51.868731 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:51.899741 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:51.899775 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:51.964837 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:51.964881 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:51.983014 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:51.983048 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:54.535969 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:54.546367 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:54.546442 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:54.572966 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:54.572987 1450159 cri.go:89] found id: ""
	I1213 15:43:54.572995 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:54.573055 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:54.576880 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:54.576951 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:54.601644 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:54.601667 1450159 cri.go:89] found id: ""
	I1213 15:43:54.601676 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:54.601735 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:54.605449 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:54.605522 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:54.630747 1450159 cri.go:89] found id: ""
	I1213 15:43:54.630771 1450159 logs.go:282] 0 containers: []
	W1213 15:43:54.630780 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:54.630786 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:54.630850 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:54.656486 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:54.656508 1450159 cri.go:89] found id: ""
	I1213 15:43:54.656517 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:54.656574 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:54.660501 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:54.660576 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:54.684795 1450159 cri.go:89] found id: ""
	I1213 15:43:54.684854 1450159 logs.go:282] 0 containers: []
	W1213 15:43:54.684869 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:54.684876 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:54.684933 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:54.710583 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:54.710605 1450159 cri.go:89] found id: ""
	I1213 15:43:54.710613 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:54.710672 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:54.714488 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:54.714565 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:54.740033 1450159 cri.go:89] found id: ""
	I1213 15:43:54.740059 1450159 logs.go:282] 0 containers: []
	W1213 15:43:54.740067 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:54.740075 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:54.740156 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:54.765055 1450159 cri.go:89] found id: ""
	I1213 15:43:54.765079 1450159 logs.go:282] 0 containers: []
	W1213 15:43:54.765088 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:54.765104 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:54.765117 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:54.829027 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:54.829055 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:54.829069 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:54.870259 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:54.870292 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:54.910570 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:54.910604 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:54.976980 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:54.977021 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:54.996436 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:54.996465 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:55.035218 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:55.035252 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:55.073519 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:55.073549 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:55.103951 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:55.104006 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:43:57.642183 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:43:57.653084 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:43:57.653160 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:43:57.680055 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:57.680080 1450159 cri.go:89] found id: ""
	I1213 15:43:57.680089 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:43:57.680167 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:57.684133 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:43:57.684238 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:43:57.713336 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:57.713356 1450159 cri.go:89] found id: ""
	I1213 15:43:57.713364 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:43:57.713472 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:57.717354 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:43:57.717448 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:43:57.745599 1450159 cri.go:89] found id: ""
	I1213 15:43:57.745621 1450159 logs.go:282] 0 containers: []
	W1213 15:43:57.745629 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:43:57.745636 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:43:57.745697 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:43:57.775728 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:57.775751 1450159 cri.go:89] found id: ""
	I1213 15:43:57.775759 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:43:57.775820 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:57.779852 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:43:57.779928 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:43:57.805933 1450159 cri.go:89] found id: ""
	I1213 15:43:57.806001 1450159 logs.go:282] 0 containers: []
	W1213 15:43:57.806023 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:43:57.806042 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:43:57.806136 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:43:57.836534 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:57.836600 1450159 cri.go:89] found id: ""
	I1213 15:43:57.836621 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:43:57.836713 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:43:57.841008 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:43:57.841125 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:43:57.877894 1450159 cri.go:89] found id: ""
	I1213 15:43:57.877963 1450159 logs.go:282] 0 containers: []
	W1213 15:43:57.877986 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:43:57.878005 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:43:57.878093 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:43:57.906587 1450159 cri.go:89] found id: ""
	I1213 15:43:57.906665 1450159 logs.go:282] 0 containers: []
	W1213 15:43:57.906683 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:43:57.906697 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:43:57.906710 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:43:57.923247 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:43:57.923285 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:43:57.958512 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:43:57.958542 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:43:57.996331 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:43:57.996361 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:43:58.026550 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:43:58.026587 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:43:58.097130 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:43:58.097169 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:43:58.169672 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:43:58.169694 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:43:58.169708 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:43:58.215130 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:43:58.215162 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:43:58.247906 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:43:58.247940 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:00.777053 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:00.787494 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:00.787579 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:00.813249 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:00.813273 1450159 cri.go:89] found id: ""
	I1213 15:44:00.813282 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:00.813339 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:00.817304 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:00.817379 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:00.846239 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:00.846259 1450159 cri.go:89] found id: ""
	I1213 15:44:00.846267 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:00.846322 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:00.850011 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:00.850118 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:00.874279 1450159 cri.go:89] found id: ""
	I1213 15:44:00.874348 1450159 logs.go:282] 0 containers: []
	W1213 15:44:00.874373 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:00.874392 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:00.874479 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:00.903861 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:00.903881 1450159 cri.go:89] found id: ""
	I1213 15:44:00.903890 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:00.903966 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:00.907740 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:00.907838 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:00.933219 1450159 cri.go:89] found id: ""
	I1213 15:44:00.933245 1450159 logs.go:282] 0 containers: []
	W1213 15:44:00.933253 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:00.933260 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:00.933322 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:00.958603 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:00.958626 1450159 cri.go:89] found id: ""
	I1213 15:44:00.958635 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:00.958698 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:00.962535 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:00.962608 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:00.989020 1450159 cri.go:89] found id: ""
	I1213 15:44:00.989048 1450159 logs.go:282] 0 containers: []
	W1213 15:44:00.989062 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:00.989068 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:00.989129 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:01.015936 1450159 cri.go:89] found id: ""
	I1213 15:44:01.015962 1450159 logs.go:282] 0 containers: []
	W1213 15:44:01.015971 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:01.016004 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:01.016020 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:01.032533 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:01.032565 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:01.066481 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:01.066514 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:01.118274 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:01.118310 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:01.157907 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:01.157938 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:01.219369 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:01.219446 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:01.287804 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:01.287826 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:01.287840 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:01.335076 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:01.335110 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:01.366490 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:01.366529 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:03.895995 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:03.906543 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:03.906616 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:03.936127 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:03.936151 1450159 cri.go:89] found id: ""
	I1213 15:44:03.936159 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:03.936222 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:03.939993 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:03.940082 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:03.966624 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:03.966644 1450159 cri.go:89] found id: ""
	I1213 15:44:03.966652 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:03.966710 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:03.971179 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:03.971303 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:03.998095 1450159 cri.go:89] found id: ""
	I1213 15:44:03.998122 1450159 logs.go:282] 0 containers: []
	W1213 15:44:03.998131 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:03.998138 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:03.998206 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:04.028554 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:04.028629 1450159 cri.go:89] found id: ""
	I1213 15:44:04.028652 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:04.028733 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:04.033165 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:04.033247 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:04.064109 1450159 cri.go:89] found id: ""
	I1213 15:44:04.064136 1450159 logs.go:282] 0 containers: []
	W1213 15:44:04.064145 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:04.064151 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:04.064212 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:04.101862 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:04.101885 1450159 cri.go:89] found id: ""
	I1213 15:44:04.101894 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:04.101953 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:04.107727 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:04.107806 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:04.141153 1450159 cri.go:89] found id: ""
	I1213 15:44:04.141179 1450159 logs.go:282] 0 containers: []
	W1213 15:44:04.141188 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:04.141194 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:04.141252 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:04.172029 1450159 cri.go:89] found id: ""
	I1213 15:44:04.172103 1450159 logs.go:282] 0 containers: []
	W1213 15:44:04.172129 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:04.172151 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:04.172176 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:04.230956 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:04.230992 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:04.300876 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:04.300897 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:04.300910 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:04.340613 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:04.340645 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:04.378066 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:04.378095 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:04.407277 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:04.407381 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:04.423742 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:04.423772 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:04.458433 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:04.458469 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:04.492853 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:04.492886 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:07.023450 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:07.035280 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:07.035380 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:07.069575 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:07.069604 1450159 cri.go:89] found id: ""
	I1213 15:44:07.069612 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:07.069669 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:07.073597 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:07.073675 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:07.112513 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:07.112536 1450159 cri.go:89] found id: ""
	I1213 15:44:07.112545 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:07.112602 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:07.118416 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:07.118518 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:07.161332 1450159 cri.go:89] found id: ""
	I1213 15:44:07.161366 1450159 logs.go:282] 0 containers: []
	W1213 15:44:07.161374 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:07.161381 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:07.161440 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:07.186832 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:07.186853 1450159 cri.go:89] found id: ""
	I1213 15:44:07.186861 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:07.186920 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:07.190658 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:07.190736 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:07.215159 1450159 cri.go:89] found id: ""
	I1213 15:44:07.215184 1450159 logs.go:282] 0 containers: []
	W1213 15:44:07.215192 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:07.215198 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:07.215256 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:07.241179 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:07.241204 1450159 cri.go:89] found id: ""
	I1213 15:44:07.241212 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:07.241302 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:07.245327 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:07.245420 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:07.270617 1450159 cri.go:89] found id: ""
	I1213 15:44:07.270641 1450159 logs.go:282] 0 containers: []
	W1213 15:44:07.270649 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:07.270656 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:07.270743 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:07.296849 1450159 cri.go:89] found id: ""
	I1213 15:44:07.296925 1450159 logs.go:282] 0 containers: []
	W1213 15:44:07.296941 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:07.296956 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:07.296968 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:07.354605 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:07.354664 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:07.371975 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:07.372004 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:07.435569 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:07.435589 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:07.435603 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:07.471902 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:07.471935 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:07.502352 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:07.502383 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:07.545725 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:07.545759 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:07.577425 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:07.577458 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:07.607034 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:07.607069 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:10.138740 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:10.149105 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:10.149174 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:10.174724 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:10.174748 1450159 cri.go:89] found id: ""
	I1213 15:44:10.174756 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:10.174821 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:10.178832 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:10.178904 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:10.203423 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:10.203447 1450159 cri.go:89] found id: ""
	I1213 15:44:10.203456 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:10.203520 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:10.207238 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:10.207346 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:10.231990 1450159 cri.go:89] found id: ""
	I1213 15:44:10.232067 1450159 logs.go:282] 0 containers: []
	W1213 15:44:10.232083 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:10.232090 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:10.232149 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:10.257122 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:10.257151 1450159 cri.go:89] found id: ""
	I1213 15:44:10.257160 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:10.257219 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:10.261201 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:10.261274 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:10.285486 1450159 cri.go:89] found id: ""
	I1213 15:44:10.285510 1450159 logs.go:282] 0 containers: []
	W1213 15:44:10.285518 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:10.285525 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:10.285582 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:10.310984 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:10.311006 1450159 cri.go:89] found id: ""
	I1213 15:44:10.311013 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:10.311068 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:10.314888 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:10.314966 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:10.341334 1450159 cri.go:89] found id: ""
	I1213 15:44:10.341360 1450159 logs.go:282] 0 containers: []
	W1213 15:44:10.341369 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:10.341375 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:10.341433 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:10.373385 1450159 cri.go:89] found id: ""
	I1213 15:44:10.373407 1450159 logs.go:282] 0 containers: []
	W1213 15:44:10.373415 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:10.373431 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:10.373445 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:10.406230 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:10.406262 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:10.440129 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:10.440159 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:10.475802 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:10.475833 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:10.535125 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:10.535156 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:10.551915 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:10.551943 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:10.617432 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:10.617452 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:10.617465 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:10.649704 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:10.649734 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:10.681220 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:10.681255 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:13.211462 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:13.222349 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:13.222414 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:13.265726 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:13.265745 1450159 cri.go:89] found id: ""
	I1213 15:44:13.265753 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:13.265809 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:13.270339 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:13.270405 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:13.301732 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:13.301751 1450159 cri.go:89] found id: ""
	I1213 15:44:13.301759 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:13.301814 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:13.306151 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:13.306272 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:13.333689 1450159 cri.go:89] found id: ""
	I1213 15:44:13.333711 1450159 logs.go:282] 0 containers: []
	W1213 15:44:13.333719 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:13.333726 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:13.333792 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:13.381737 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:13.381756 1450159 cri.go:89] found id: ""
	I1213 15:44:13.381764 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:13.381820 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:13.386875 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:13.386949 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:13.414555 1450159 cri.go:89] found id: ""
	I1213 15:44:13.414634 1450159 logs.go:282] 0 containers: []
	W1213 15:44:13.414657 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:13.414675 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:13.414783 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:13.442126 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:13.442145 1450159 cri.go:89] found id: ""
	I1213 15:44:13.442153 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:13.442211 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:13.446397 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:13.446536 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:13.473544 1450159 cri.go:89] found id: ""
	I1213 15:44:13.473621 1450159 logs.go:282] 0 containers: []
	W1213 15:44:13.473649 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:13.473682 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:13.473773 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:13.507903 1450159 cri.go:89] found id: ""
	I1213 15:44:13.507973 1450159 logs.go:282] 0 containers: []
	W1213 15:44:13.507995 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:13.508020 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:13.508060 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:13.545618 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:13.545881 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:13.583292 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:13.583446 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:13.637785 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:13.637861 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:13.715638 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:13.715715 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:13.752824 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:13.752852 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:13.846420 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:13.846481 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:13.846516 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:13.917184 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:13.917255 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:13.967452 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:13.967528 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
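
The block above is one iteration of the wait-for-apiserver loop: roughly every three seconds the test pgreps for a kube-apiserver process, lists CRI containers for each control-plane component, and re-gathers logs. Only kube-apiserver, etcd, kube-scheduler and kube-controller-manager containers are found; coredns, kube-proxy, kindnet and storage-provisioner never appear, and "describe nodes" keeps failing because nothing answers on localhost:8443. The same checks can be replayed by hand on the node; this is a sketch assembled from the Run: lines above, with the container ID left as a placeholder:

	sudo pgrep -xnf kube-apiserver.*minikube.*
	sudo crictl ps -a --quiet --name=kube-apiserver        # repeated for etcd, coredns, kube-scheduler, kube-proxy, ...
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
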
	I1213 15:44:16.513079 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:16.523274 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:16.523372 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:16.547824 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:16.547847 1450159 cri.go:89] found id: ""
	I1213 15:44:16.547855 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:16.547911 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:16.551624 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:16.551697 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:16.576813 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:16.576835 1450159 cri.go:89] found id: ""
	I1213 15:44:16.576844 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:16.576899 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:16.580476 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:16.580549 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:16.604381 1450159 cri.go:89] found id: ""
	I1213 15:44:16.604406 1450159 logs.go:282] 0 containers: []
	W1213 15:44:16.604415 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:16.604421 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:16.604482 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:16.633298 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:16.633318 1450159 cri.go:89] found id: ""
	I1213 15:44:16.633329 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:16.633387 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:16.637270 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:16.637343 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:16.661987 1450159 cri.go:89] found id: ""
	I1213 15:44:16.662009 1450159 logs.go:282] 0 containers: []
	W1213 15:44:16.662018 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:16.662024 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:16.662081 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:16.691151 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:16.691175 1450159 cri.go:89] found id: ""
	I1213 15:44:16.691183 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:16.691241 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:16.696208 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:16.696299 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:16.727627 1450159 cri.go:89] found id: ""
	I1213 15:44:16.727649 1450159 logs.go:282] 0 containers: []
	W1213 15:44:16.727657 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:16.727663 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:16.727729 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:16.771544 1450159 cri.go:89] found id: ""
	I1213 15:44:16.771566 1450159 logs.go:282] 0 containers: []
	W1213 15:44:16.771574 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:16.771628 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:16.771639 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:16.837055 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:16.837095 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:16.857146 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:16.857189 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:16.894885 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:16.894921 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:16.927949 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:16.927983 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:16.956677 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:16.956707 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:17.055110 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:17.055132 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:17.055148 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:17.138642 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:17.138678 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:17.189048 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:17.189082 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:19.739520 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:19.749759 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:19.749832 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:19.779551 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:19.779573 1450159 cri.go:89] found id: ""
	I1213 15:44:19.779581 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:19.779637 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:19.783283 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:19.783385 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:19.807592 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:19.807615 1450159 cri.go:89] found id: ""
	I1213 15:44:19.807624 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:19.807680 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:19.811368 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:19.811445 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:19.835399 1450159 cri.go:89] found id: ""
	I1213 15:44:19.835429 1450159 logs.go:282] 0 containers: []
	W1213 15:44:19.835438 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:19.835444 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:19.835513 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:19.861019 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:19.861043 1450159 cri.go:89] found id: ""
	I1213 15:44:19.861051 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:19.861111 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:19.864987 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:19.865061 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:19.889841 1450159 cri.go:89] found id: ""
	I1213 15:44:19.889865 1450159 logs.go:282] 0 containers: []
	W1213 15:44:19.889874 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:19.889880 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:19.889938 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:19.915173 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:19.915195 1450159 cri.go:89] found id: ""
	I1213 15:44:19.915204 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:19.915262 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:19.919173 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:19.919257 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:19.945816 1450159 cri.go:89] found id: ""
	I1213 15:44:19.945840 1450159 logs.go:282] 0 containers: []
	W1213 15:44:19.945848 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:19.945854 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:19.945911 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:19.970741 1450159 cri.go:89] found id: ""
	I1213 15:44:19.970764 1450159 logs.go:282] 0 containers: []
	W1213 15:44:19.970773 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:19.970787 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:19.970800 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:19.999651 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:19.999689 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:20.059068 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:20.059107 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:20.076877 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:20.076910 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:20.174253 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:20.174274 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:20.174287 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:20.203673 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:20.203702 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:20.237727 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:20.237760 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:20.271160 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:20.271191 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:20.304682 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:20.304723 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:22.835902 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:22.846239 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:22.846306 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:22.873625 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:22.873645 1450159 cri.go:89] found id: ""
	I1213 15:44:22.873653 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:22.873714 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:22.877896 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:22.877968 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:22.906620 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:22.906642 1450159 cri.go:89] found id: ""
	I1213 15:44:22.906651 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:22.906708 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:22.910489 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:22.910560 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:22.937679 1450159 cri.go:89] found id: ""
	I1213 15:44:22.937704 1450159 logs.go:282] 0 containers: []
	W1213 15:44:22.937713 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:22.937720 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:22.937798 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:22.963016 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:22.963038 1450159 cri.go:89] found id: ""
	I1213 15:44:22.963047 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:22.963102 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:22.966976 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:22.967055 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:22.993788 1450159 cri.go:89] found id: ""
	I1213 15:44:22.993812 1450159 logs.go:282] 0 containers: []
	W1213 15:44:22.993821 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:22.993827 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:22.993893 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:23.033498 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:23.033522 1450159 cri.go:89] found id: ""
	I1213 15:44:23.033532 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:23.033609 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:23.037653 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:23.037728 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:23.064044 1450159 cri.go:89] found id: ""
	I1213 15:44:23.064070 1450159 logs.go:282] 0 containers: []
	W1213 15:44:23.064078 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:23.064085 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:23.064172 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:23.101869 1450159 cri.go:89] found id: ""
	I1213 15:44:23.101894 1450159 logs.go:282] 0 containers: []
	W1213 15:44:23.101903 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:23.101949 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:23.101968 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:23.146278 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:23.146310 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:23.184985 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:23.185018 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:23.219925 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:23.219963 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:23.254518 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:23.254551 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:23.314093 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:23.314131 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:23.330936 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:23.330967 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:23.407639 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:23.407726 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:23.407749 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:23.442506 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:23.442548 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:25.977698 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:25.990297 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:25.990372 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:26.025129 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:26.025151 1450159 cri.go:89] found id: ""
	I1213 15:44:26.025160 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:26.025222 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:26.029766 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:26.029849 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:26.060847 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:26.060872 1450159 cri.go:89] found id: ""
	I1213 15:44:26.060881 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:26.060942 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:26.065494 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:26.065580 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:26.114329 1450159 cri.go:89] found id: ""
	I1213 15:44:26.114365 1450159 logs.go:282] 0 containers: []
	W1213 15:44:26.114384 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:26.114397 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:26.114509 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:26.170337 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:26.170404 1450159 cri.go:89] found id: ""
	I1213 15:44:26.170429 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:26.170510 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:26.175429 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:26.175511 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:26.206391 1450159 cri.go:89] found id: ""
	I1213 15:44:26.206424 1450159 logs.go:282] 0 containers: []
	W1213 15:44:26.206435 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:26.206445 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:26.206527 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:26.234387 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:26.234421 1450159 cri.go:89] found id: ""
	I1213 15:44:26.234431 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:26.234500 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:26.238882 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:26.238956 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:26.267092 1450159 cri.go:89] found id: ""
	I1213 15:44:26.267155 1450159 logs.go:282] 0 containers: []
	W1213 15:44:26.267168 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:26.267185 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:26.267297 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:26.296631 1450159 cri.go:89] found id: ""
	I1213 15:44:26.296658 1450159 logs.go:282] 0 containers: []
	W1213 15:44:26.296666 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:26.296681 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:26.296692 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:26.347783 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:26.347822 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:26.385535 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:26.385569 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:26.421107 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:26.421143 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:26.438045 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:26.438078 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:26.476227 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:26.476264 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:26.505473 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:26.505514 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:26.534659 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:26.534689 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:26.597936 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:26.597972 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:26.671455 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
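
Note: each retry ends the same way. crictl still lists the kube-apiserver container (bb377c8b...), but the API never becomes reachable on localhost:8443, so every iteration stops at the failed "describe nodes" call. Two hypothetical manual probes, not part of the test harness and assuming ss and crictl are available on the node, that would distinguish "container present but not serving" from "container exited":

	sudo ss -ltnp | grep ':8443'              # is any process listening on the apiserver port?
	sudo crictl ps -a --name=kube-apiserver   # STATE column: Running vs Exited for the apiserver container
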
	I1213 15:44:29.171675 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:29.183718 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:29.183792 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:29.210623 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:29.210647 1450159 cri.go:89] found id: ""
	I1213 15:44:29.210657 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:29.210716 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:29.214724 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:29.214797 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:29.240327 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:29.240350 1450159 cri.go:89] found id: ""
	I1213 15:44:29.240359 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:29.240419 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:29.244294 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:29.244405 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:29.268989 1450159 cri.go:89] found id: ""
	I1213 15:44:29.269015 1450159 logs.go:282] 0 containers: []
	W1213 15:44:29.269037 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:29.269044 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:29.269111 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:29.299454 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:29.299477 1450159 cri.go:89] found id: ""
	I1213 15:44:29.299485 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:29.299544 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:29.303229 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:29.303298 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:29.327679 1450159 cri.go:89] found id: ""
	I1213 15:44:29.327705 1450159 logs.go:282] 0 containers: []
	W1213 15:44:29.327714 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:29.327721 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:29.327781 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:29.354497 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:29.354520 1450159 cri.go:89] found id: ""
	I1213 15:44:29.354529 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:29.354587 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:29.359702 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:29.359773 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:29.386616 1450159 cri.go:89] found id: ""
	I1213 15:44:29.386643 1450159 logs.go:282] 0 containers: []
	W1213 15:44:29.386653 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:29.386660 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:29.386723 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:29.416478 1450159 cri.go:89] found id: ""
	I1213 15:44:29.416503 1450159 logs.go:282] 0 containers: []
	W1213 15:44:29.416512 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:29.416525 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:29.416546 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:29.435108 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:29.435135 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:29.502023 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:29.502045 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:29.502060 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:29.537200 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:29.537239 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:29.570437 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:29.570473 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:29.612379 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:29.612414 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:29.643097 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:29.643134 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:29.707856 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:29.707912 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:29.738536 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:29.738569 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:32.270086 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:32.281020 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:32.281095 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:32.306332 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:32.306354 1450159 cri.go:89] found id: ""
	I1213 15:44:32.306363 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:32.306421 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:32.310319 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:32.310393 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:32.334814 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:32.334834 1450159 cri.go:89] found id: ""
	I1213 15:44:32.334843 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:32.334901 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:32.338797 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:32.338925 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:32.375945 1450159 cri.go:89] found id: ""
	I1213 15:44:32.375971 1450159 logs.go:282] 0 containers: []
	W1213 15:44:32.375979 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:32.375986 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:32.376044 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:32.401232 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:32.401300 1450159 cri.go:89] found id: ""
	I1213 15:44:32.401316 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:32.401376 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:32.405383 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:32.405485 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:32.436446 1450159 cri.go:89] found id: ""
	I1213 15:44:32.436524 1450159 logs.go:282] 0 containers: []
	W1213 15:44:32.436544 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:32.436554 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:32.436634 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:32.461299 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:32.461323 1450159 cri.go:89] found id: ""
	I1213 15:44:32.461331 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:32.461408 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:32.466024 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:32.466127 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:32.495443 1450159 cri.go:89] found id: ""
	I1213 15:44:32.495466 1450159 logs.go:282] 0 containers: []
	W1213 15:44:32.495474 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:32.495481 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:32.495570 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:32.525990 1450159 cri.go:89] found id: ""
	I1213 15:44:32.526021 1450159 logs.go:282] 0 containers: []
	W1213 15:44:32.526030 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:32.526045 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:32.526078 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:32.583231 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:32.583267 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:32.600571 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:32.600601 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:32.668822 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:32.668886 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:32.668906 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:32.703117 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:32.703151 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:32.739529 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:32.739566 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:32.781761 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:32.781807 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:32.812367 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:32.812408 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:32.855753 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:32.855792 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:35.387865 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:35.398722 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:35.398807 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:35.424182 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:35.424203 1450159 cri.go:89] found id: ""
	I1213 15:44:35.424211 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:35.424278 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:35.428214 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:35.428302 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:35.454489 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:35.454512 1450159 cri.go:89] found id: ""
	I1213 15:44:35.454520 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:35.454581 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:35.458383 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:35.458478 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:35.483460 1450159 cri.go:89] found id: ""
	I1213 15:44:35.483485 1450159 logs.go:282] 0 containers: []
	W1213 15:44:35.483493 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:35.483500 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:35.483585 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:35.513663 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:35.513686 1450159 cri.go:89] found id: ""
	I1213 15:44:35.513695 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:35.513758 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:35.517806 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:35.517887 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:35.542656 1450159 cri.go:89] found id: ""
	I1213 15:44:35.542682 1450159 logs.go:282] 0 containers: []
	W1213 15:44:35.542691 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:35.542697 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:35.542764 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:35.573733 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:35.573756 1450159 cri.go:89] found id: ""
	I1213 15:44:35.573765 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:35.573821 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:35.577705 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:35.577782 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:35.602207 1450159 cri.go:89] found id: ""
	I1213 15:44:35.602233 1450159 logs.go:282] 0 containers: []
	W1213 15:44:35.602241 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:35.602247 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:35.602305 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:35.627259 1450159 cri.go:89] found id: ""
	I1213 15:44:35.627284 1450159 logs.go:282] 0 containers: []
	W1213 15:44:35.627294 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:35.627335 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:35.627348 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:35.655900 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:35.655935 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:35.719176 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:35.719211 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:35.786732 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:35.786754 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:35.786768 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:35.819115 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:35.819148 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:35.874307 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:35.874412 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:35.907003 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:35.907030 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:35.924338 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:35.924369 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:35.958539 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:35.958573 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:38.494916 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:38.505310 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:38.505382 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:38.530883 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:38.530904 1450159 cri.go:89] found id: ""
	I1213 15:44:38.530915 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:38.530972 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:38.534830 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:38.534905 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:38.560595 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:38.560625 1450159 cri.go:89] found id: ""
	I1213 15:44:38.560634 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:38.560697 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:38.564766 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:38.564855 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:38.589068 1450159 cri.go:89] found id: ""
	I1213 15:44:38.589143 1450159 logs.go:282] 0 containers: []
	W1213 15:44:38.589165 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:38.589179 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:38.589259 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:38.621418 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:38.621441 1450159 cri.go:89] found id: ""
	I1213 15:44:38.621449 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:38.621505 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:38.626286 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:38.626358 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:38.654903 1450159 cri.go:89] found id: ""
	I1213 15:44:38.654929 1450159 logs.go:282] 0 containers: []
	W1213 15:44:38.654937 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:38.654943 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:38.655002 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:38.687974 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:38.687993 1450159 cri.go:89] found id: ""
	I1213 15:44:38.688007 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:38.688061 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:38.696678 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:38.696792 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:38.728384 1450159 cri.go:89] found id: ""
	I1213 15:44:38.728447 1450159 logs.go:282] 0 containers: []
	W1213 15:44:38.728471 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:38.728488 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:38.728573 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:38.768665 1450159 cri.go:89] found id: ""
	I1213 15:44:38.768738 1450159 logs.go:282] 0 containers: []
	W1213 15:44:38.768751 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:38.768765 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:38.768777 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:38.878191 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:38.878210 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:38.878223 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:38.955221 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:38.955296 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:38.999367 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:38.999455 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:39.054174 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:39.054253 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:39.104427 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:39.104454 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:39.173078 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:39.173155 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:39.194368 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:39.194396 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:39.246128 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:39.246201 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:41.779482 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:41.790257 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:41.790329 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:41.815498 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:41.815523 1450159 cri.go:89] found id: ""
	I1213 15:44:41.815531 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:41.815588 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:41.819530 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:41.819606 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:41.855972 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:41.855993 1450159 cri.go:89] found id: ""
	I1213 15:44:41.856002 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:41.856056 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:41.861434 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:41.861508 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:41.895179 1450159 cri.go:89] found id: ""
	I1213 15:44:41.895205 1450159 logs.go:282] 0 containers: []
	W1213 15:44:41.895213 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:41.895219 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:41.895280 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:41.925014 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:41.925037 1450159 cri.go:89] found id: ""
	I1213 15:44:41.925047 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:41.925104 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:41.929070 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:41.929146 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:41.953929 1450159 cri.go:89] found id: ""
	I1213 15:44:41.953954 1450159 logs.go:282] 0 containers: []
	W1213 15:44:41.953963 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:41.953969 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:41.954049 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:41.983445 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:41.983468 1450159 cri.go:89] found id: ""
	I1213 15:44:41.983476 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:41.983531 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:41.987283 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:41.987387 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:42.016284 1450159 cri.go:89] found id: ""
	I1213 15:44:42.016358 1450159 logs.go:282] 0 containers: []
	W1213 15:44:42.016374 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:42.016383 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:42.016452 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:42.046206 1450159 cri.go:89] found id: ""
	I1213 15:44:42.046231 1450159 logs.go:282] 0 containers: []
	W1213 15:44:42.046239 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:42.046255 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:42.046274 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:42.094941 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:42.094974 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:42.141060 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:42.141111 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:42.163147 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:42.163258 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:42.222952 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:42.222996 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:42.259476 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:42.259528 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:42.293975 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:42.294006 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:42.359793 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:42.359844 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:42.427700 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:42.427724 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:42.427737 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:44.960226 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:44.972356 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:44.972431 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:44.998878 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:44.998900 1450159 cri.go:89] found id: ""
	I1213 15:44:44.998909 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:44.998969 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:45.010528 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:45.010624 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:45.141487 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:45.141512 1450159 cri.go:89] found id: ""
	I1213 15:44:45.141522 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:45.141592 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:45.147253 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:45.147423 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:45.217389 1450159 cri.go:89] found id: ""
	I1213 15:44:45.217469 1450159 logs.go:282] 0 containers: []
	W1213 15:44:45.217493 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:45.217514 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:45.217631 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:45.294386 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:45.294560 1450159 cri.go:89] found id: ""
	I1213 15:44:45.294664 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:45.294788 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:45.302962 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:45.303540 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:45.340915 1450159 cri.go:89] found id: ""
	I1213 15:44:45.340941 1450159 logs.go:282] 0 containers: []
	W1213 15:44:45.340949 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:45.340956 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:45.341049 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:45.377386 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:45.377409 1450159 cri.go:89] found id: ""
	I1213 15:44:45.377417 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:45.377478 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:45.381571 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:45.381654 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:45.412521 1450159 cri.go:89] found id: ""
	I1213 15:44:45.412550 1450159 logs.go:282] 0 containers: []
	W1213 15:44:45.412559 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:45.412565 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:45.412677 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:45.439976 1450159 cri.go:89] found id: ""
	I1213 15:44:45.440068 1450159 logs.go:282] 0 containers: []
	W1213 15:44:45.440112 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:45.440177 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:45.440211 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:45.504346 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:45.504384 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:45.547503 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:45.547539 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:45.585757 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:45.585802 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:45.619222 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:45.619305 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:45.662575 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:45.662600 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:45.680974 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:45.681004 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:45.752585 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:45.752610 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:45.752623 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:45.791685 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:45.791722 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:48.323756 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:48.334104 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:44:48.334210 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:44:48.368174 1450159 cri.go:89] found id: "bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:48.368199 1450159 cri.go:89] found id: ""
	I1213 15:44:48.368207 1450159 logs.go:282] 1 containers: [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6]
	I1213 15:44:48.368285 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:48.372324 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:44:48.372400 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:44:48.398149 1450159 cri.go:89] found id: "45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:48.398173 1450159 cri.go:89] found id: ""
	I1213 15:44:48.398182 1450159 logs.go:282] 1 containers: [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07]
	I1213 15:44:48.398241 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:48.402226 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:44:48.402310 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:44:48.428389 1450159 cri.go:89] found id: ""
	I1213 15:44:48.428425 1450159 logs.go:282] 0 containers: []
	W1213 15:44:48.428435 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:44:48.428441 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:44:48.428522 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:44:48.455125 1450159 cri.go:89] found id: "85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:48.455147 1450159 cri.go:89] found id: ""
	I1213 15:44:48.455156 1450159 logs.go:282] 1 containers: [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba]
	I1213 15:44:48.455215 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:48.459090 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:44:48.459168 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:44:48.483687 1450159 cri.go:89] found id: ""
	I1213 15:44:48.483711 1450159 logs.go:282] 0 containers: []
	W1213 15:44:48.483720 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:44:48.483726 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:44:48.483796 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:44:48.509510 1450159 cri.go:89] found id: "d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:48.509533 1450159 cri.go:89] found id: ""
	I1213 15:44:48.509542 1450159 logs.go:282] 1 containers: [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b]
	I1213 15:44:48.509601 1450159 ssh_runner.go:195] Run: which crictl
	I1213 15:44:48.513660 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:44:48.513760 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:44:48.543685 1450159 cri.go:89] found id: ""
	I1213 15:44:48.543713 1450159 logs.go:282] 0 containers: []
	W1213 15:44:48.543722 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:44:48.543729 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:44:48.543795 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:44:48.573825 1450159 cri.go:89] found id: ""
	I1213 15:44:48.573849 1450159 logs.go:282] 0 containers: []
	W1213 15:44:48.573857 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:44:48.573871 1450159 logs.go:123] Gathering logs for kube-controller-manager [d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b] ...
	I1213 15:44:48.573883 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b"
	I1213 15:44:48.615085 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:44:48.615122 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 15:44:48.660192 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:44:48.660276 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:44:48.680914 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:44:48.680942 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:44:48.752765 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:44:48.752787 1450159 logs.go:123] Gathering logs for kube-apiserver [bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6] ...
	I1213 15:44:48.752800 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6"
	I1213 15:44:48.788829 1450159 logs.go:123] Gathering logs for etcd [45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07] ...
	I1213 15:44:48.788863 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07"
	I1213 15:44:48.820593 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:44:48.820624 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:44:48.851067 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:44:48.851104 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:44:48.909671 1450159 logs.go:123] Gathering logs for kube-scheduler [85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba] ...
	I1213 15:44:48.909706 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba"
	I1213 15:44:51.445550 1450159 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:44:51.455868 1450159 kubeadm.go:602] duration metric: took 4m3.96166319s to restartPrimaryControlPlane
	W1213 15:44:51.455941 1450159 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 15:44:51.456006 1450159 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 15:44:51.939019 1450159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:44:51.952701 1450159 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 15:44:51.961122 1450159 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:44:51.961198 1450159 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:44:51.969470 1450159 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:44:51.969493 1450159 kubeadm.go:158] found existing configuration files:
	
	I1213 15:44:51.969545 1450159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 15:44:51.977397 1450159 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:44:51.977465 1450159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:44:51.985046 1450159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 15:44:51.992938 1450159 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:44:51.993009 1450159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:44:52.000555 1450159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 15:44:52.014123 1450159 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:44:52.014197 1450159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:44:52.022778 1450159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 15:44:52.031417 1450159 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:44:52.031489 1450159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 15:44:52.040262 1450159 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:44:52.085000 1450159 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 15:44:52.085074 1450159 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:44:52.168926 1450159 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:44:52.169005 1450159 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:44:52.169048 1450159 kubeadm.go:319] OS: Linux
	I1213 15:44:52.169097 1450159 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:44:52.169149 1450159 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:44:52.169198 1450159 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:44:52.169251 1450159 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:44:52.169301 1450159 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:44:52.169353 1450159 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:44:52.169401 1450159 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:44:52.169452 1450159 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:44:52.169503 1450159 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:44:52.241045 1450159 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:44:52.241160 1450159 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:44:52.241256 1450159 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:45:02.254466 1450159 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:45:02.257434 1450159 out.go:252]   - Generating certificates and keys ...
	I1213 15:45:02.257542 1450159 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:45:02.257615 1450159 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:45:02.257697 1450159 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 15:45:02.257762 1450159 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 15:45:02.257836 1450159 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 15:45:02.258189 1450159 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 15:45:02.258854 1450159 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 15:45:02.259473 1450159 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 15:45:02.260088 1450159 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 15:45:02.260694 1450159 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 15:45:02.263183 1450159 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 15:45:02.263514 1450159 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:45:02.668549 1450159 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:45:02.784297 1450159 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:45:02.850996 1450159 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:45:03.257993 1450159 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:45:03.668238 1450159 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:45:03.668835 1450159 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:45:03.671560 1450159 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:45:03.674843 1450159 out.go:252]   - Booting up control plane ...
	I1213 15:45:03.674951 1450159 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:45:03.675029 1450159 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:45:03.675096 1450159 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:45:03.696357 1450159 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:45:03.696468 1450159 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:45:03.704501 1450159 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:45:03.705067 1450159 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:45:03.705169 1450159 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:45:03.841334 1450159 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:45:03.841459 1450159 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:49:03.842615 1450159 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001399865s
	I1213 15:49:03.842650 1450159 kubeadm.go:319] 
	I1213 15:49:03.842708 1450159 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:49:03.842741 1450159 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:49:03.842846 1450159 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:49:03.842852 1450159 kubeadm.go:319] 
	I1213 15:49:03.842957 1450159 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:49:03.842991 1450159 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:49:03.843022 1450159 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:49:03.843026 1450159 kubeadm.go:319] 
	I1213 15:49:03.847488 1450159 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:49:03.847911 1450159 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:49:03.848019 1450159 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:49:03.848292 1450159 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 15:49:03.848299 1450159 kubeadm.go:319] 
	I1213 15:49:03.848370 1450159 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 15:49:03.848482 1450159 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001399865s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001399865s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 15:49:03.848560 1450159 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 15:49:04.277132 1450159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:49:04.295430 1450159 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:49:04.295496 1450159 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:49:04.310966 1450159 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:49:04.310987 1450159 kubeadm.go:158] found existing configuration files:
	
	I1213 15:49:04.311040 1450159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 15:49:04.320375 1450159 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:49:04.320441 1450159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:49:04.330856 1450159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 15:49:04.340514 1450159 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:49:04.340580 1450159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:49:04.349803 1450159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 15:49:04.361758 1450159 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:49:04.361818 1450159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:49:04.371426 1450159 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 15:49:04.383428 1450159 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:49:04.383543 1450159 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 15:49:04.400759 1450159 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:49:04.475467 1450159 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 15:49:04.475940 1450159 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:49:04.577013 1450159 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:49:04.577171 1450159 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:49:04.577223 1450159 kubeadm.go:319] OS: Linux
	I1213 15:49:04.577276 1450159 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:49:04.577335 1450159 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:49:04.577386 1450159 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:49:04.577476 1450159 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:49:04.577565 1450159 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:49:04.577642 1450159 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:49:04.577723 1450159 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:49:04.577799 1450159 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:49:04.577880 1450159 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:49:04.686374 1450159 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:49:04.686546 1450159 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:49:04.686672 1450159 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:49:04.697734 1450159 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:49:04.703794 1450159 out.go:252]   - Generating certificates and keys ...
	I1213 15:49:04.703923 1450159 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:49:04.704008 1450159 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:49:04.704103 1450159 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 15:49:04.704167 1450159 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 15:49:04.704297 1450159 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 15:49:04.704366 1450159 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 15:49:04.704454 1450159 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 15:49:04.704525 1450159 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 15:49:04.704987 1450159 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 15:49:04.706545 1450159 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 15:49:04.706685 1450159 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 15:49:04.706823 1450159 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:49:04.925675 1450159 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:49:05.174948 1450159 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:49:05.392641 1450159 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:49:05.691266 1450159 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:49:05.822448 1450159 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:49:05.822554 1450159 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:49:05.831774 1450159 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:49:05.836913 1450159 out.go:252]   - Booting up control plane ...
	I1213 15:49:05.837019 1450159 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:49:05.837103 1450159 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:49:05.838545 1450159 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:49:05.861203 1450159 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:49:05.861338 1450159 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:49:05.878026 1450159 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:49:05.878128 1450159 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:49:05.878168 1450159 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:49:06.094322 1450159 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:49:06.094445 1450159 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:53:06.096078 1450159 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001945602s
	I1213 15:53:06.097403 1450159 kubeadm.go:319] 
	I1213 15:53:06.097525 1450159 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:53:06.097577 1450159 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:53:06.097681 1450159 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:53:06.097687 1450159 kubeadm.go:319] 
	I1213 15:53:06.097791 1450159 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:53:06.097822 1450159 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:53:06.097853 1450159 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:53:06.097857 1450159 kubeadm.go:319] 
	I1213 15:53:06.102245 1450159 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:53:06.102671 1450159 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:53:06.102788 1450159 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:53:06.103027 1450159 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 15:53:06.103037 1450159 kubeadm.go:319] 
	I1213 15:53:06.103108 1450159 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 15:53:06.103171 1450159 kubeadm.go:403] duration metric: took 12m18.665565751s to StartCluster
	I1213 15:53:06.103211 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:53:06.103299 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:53:06.130642 1450159 cri.go:89] found id: ""
	I1213 15:53:06.130664 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.130672 1450159 logs.go:284] No container was found matching "kube-apiserver"
	I1213 15:53:06.130678 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:53:06.130757 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:53:06.155752 1450159 cri.go:89] found id: ""
	I1213 15:53:06.155777 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.155786 1450159 logs.go:284] No container was found matching "etcd"
	I1213 15:53:06.155792 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:53:06.155875 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:53:06.180803 1450159 cri.go:89] found id: ""
	I1213 15:53:06.180828 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.180837 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:53:06.180847 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:53:06.180907 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:53:06.206566 1450159 cri.go:89] found id: ""
	I1213 15:53:06.206594 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.206603 1450159 logs.go:284] No container was found matching "kube-scheduler"
	I1213 15:53:06.206609 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:53:06.206695 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:53:06.231426 1450159 cri.go:89] found id: ""
	I1213 15:53:06.231458 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.231468 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:53:06.231474 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:53:06.231566 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:53:06.256941 1450159 cri.go:89] found id: ""
	I1213 15:53:06.256964 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.256972 1450159 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 15:53:06.256979 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:53:06.257041 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:53:06.281418 1450159 cri.go:89] found id: ""
	I1213 15:53:06.281442 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.281451 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:53:06.281457 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:53:06.281516 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:53:06.307575 1450159 cri.go:89] found id: ""
	I1213 15:53:06.307602 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.307610 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:53:06.307621 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:53:06.307635 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:53:06.367685 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:53:06.367771 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:53:06.393106 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:53:06.393134 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:53:06.456236 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:53:06.456308 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:53:06.456334 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:53:06.502009 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:53:06.502045 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 15:53:06.531909 1450159 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001945602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 15:53:06.531954 1450159 out.go:285] * 
	W1213 15:53:06.532033 1450159 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001945602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:53:06.532220 1450159 out.go:285] * 
	W1213 15:53:06.534619 1450159 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 15:53:06.541372 1450159 out.go:203] 
	W1213 15:53:06.544133 1450159 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001945602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:53:06.544189 1450159 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 15:53:06.544223 1450159 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 15:53:06.547500 1450159 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-098313 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-098313 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-098313 version --output=json: exit status 1 (88.61755ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-13 15:53:07.335795604 +0000 UTC m=+4937.100433040
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-098313
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-098313:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd4b13295d4cdcc47446ff97e71e50fc408e32e002ebec5037af9fba06bd586f",
	        "Created": "2025-12-13T15:39:59.548040125Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1450286,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T15:40:32.682096766Z",
	            "FinishedAt": "2025-12-13T15:40:31.527626295Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/dd4b13295d4cdcc47446ff97e71e50fc408e32e002ebec5037af9fba06bd586f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd4b13295d4cdcc47446ff97e71e50fc408e32e002ebec5037af9fba06bd586f/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd4b13295d4cdcc47446ff97e71e50fc408e32e002ebec5037af9fba06bd586f/hosts",
	        "LogPath": "/var/lib/docker/containers/dd4b13295d4cdcc47446ff97e71e50fc408e32e002ebec5037af9fba06bd586f/dd4b13295d4cdcc47446ff97e71e50fc408e32e002ebec5037af9fba06bd586f-json.log",
	        "Name": "/kubernetes-upgrade-098313",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-098313:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-098313",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd4b13295d4cdcc47446ff97e71e50fc408e32e002ebec5037af9fba06bd586f",
	                "LowerDir": "/var/lib/docker/overlay2/3a95b2f06338ff19f0cb2ec584d698a1635d3fe68b9d61565e79f8156ace1c6a-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a95b2f06338ff19f0cb2ec584d698a1635d3fe68b9d61565e79f8156ace1c6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a95b2f06338ff19f0cb2ec584d698a1635d3fe68b9d61565e79f8156ace1c6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a95b2f06338ff19f0cb2ec584d698a1635d3fe68b9d61565e79f8156ace1c6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-098313",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-098313/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-098313",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-098313",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-098313",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d9b189d8031aa85c16a35e8d5793c6212f995626cfe0742bbbe10b698aef1dbd",
	            "SandboxKey": "/var/run/docker/netns/d9b189d8031a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-098313": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:e4:cb:c4:eb:7b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "176044c8d04c30f1b56581b0efeef4371a8c93b9dd73e01aa98aabfbc7825089",
	                    "EndpointID": "f42bdf042b333dbad1744a080f8fe994ca344bcdfddabb6fa21f928c74d201b6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-098313",
	                        "dd4b13295d4c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-098313 -n kubernetes-upgrade-098313
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-098313 -n kubernetes-upgrade-098313: exit status 2 (341.241491ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-098313 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-023791 sudo cat /etc/kubernetes/kubelet.conf                                                           │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo cat /var/lib/kubelet/config.yaml                                                           │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo systemctl status docker --all --full --no-pager                                            │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo systemctl cat docker --no-pager                                                            │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo cat /etc/docker/daemon.json                                                                │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo docker system info                                                                         │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo systemctl status cri-docker --all --full --no-pager                                        │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo systemctl cat cri-docker --no-pager                                                        │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo cri-dockerd --version                                                                      │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo systemctl status containerd --all --full --no-pager                                        │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo systemctl cat containerd --no-pager                                                        │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo cat /lib/systemd/system/containerd.service                                                 │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo cat /etc/containerd/config.toml                                                            │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo containerd config dump                                                                     │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo systemctl status crio --all --full --no-pager                                              │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo systemctl cat crio --no-pager                                                              │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ ssh     │ -p cilium-023791 sudo crio config                                                                                │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │                     │
	│ delete  │ -p cilium-023791                                                                                                 │ cilium-023791            │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │ 13 Dec 25 15:49 UTC │
	│ start   │ -p force-systemd-env-206382 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-env-206382 │ jenkins │ v1.37.0 │ 13 Dec 25 15:49 UTC │ 13 Dec 25 15:50 UTC │
	│ ssh     │ force-systemd-env-206382 ssh cat /etc/containerd/config.toml                                                     │ force-systemd-env-206382 │ jenkins │ v1.37.0 │ 13 Dec 25 15:50 UTC │ 13 Dec 25 15:50 UTC │
	│ delete  │ -p force-systemd-env-206382                                                                                      │ force-systemd-env-206382 │ jenkins │ v1.37.0 │ 13 Dec 25 15:50 UTC │ 13 Dec 25 15:50 UTC │
	│ start   │ -p cert-expiration-652483 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd     │ cert-expiration-652483   │ jenkins │ v1.37.0 │ 13 Dec 25 15:50 UTC │ 13 Dec 25 15:51 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 15:50:29
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
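	The klog header above spells out the line format: a severity letter ([IWEF]), the date and time, the thread id, and the source file:line. When triaging a long "Last Start" dump like this one, the W/E lines are the quickest signal; a minimal shell sketch, assuming the section has been saved to a hypothetical last-start.log:
	  # keep only klog warning (W) and error (E) lines
	  grep -E '^[[:space:]]*[WE][0-9]{4} ' last-start.log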
	I1213 15:50:29.533982 1490314 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:50:29.534090 1490314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:50:29.534094 1490314 out.go:374] Setting ErrFile to fd 2...
	I1213 15:50:29.534097 1490314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:50:29.534345 1490314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:50:29.534728 1490314 out.go:368] Setting JSON to false
	I1213 15:50:29.535628 1490314 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27178,"bootTime":1765613851,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 15:50:29.535687 1490314 start.go:143] virtualization:  
	I1213 15:50:29.540069 1490314 out.go:179] * [cert-expiration-652483] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 15:50:29.544537 1490314 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 15:50:29.544617 1490314 notify.go:221] Checking for updates...
	I1213 15:50:29.551340 1490314 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 15:50:29.554668 1490314 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 15:50:29.557879 1490314 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 15:50:29.560987 1490314 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 15:50:29.564204 1490314 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 15:50:29.567839 1490314 config.go:182] Loaded profile config "kubernetes-upgrade-098313": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 15:50:29.567962 1490314 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 15:50:29.604416 1490314 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 15:50:29.604543 1490314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 15:50:29.668698 1490314 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 15:50:29.659192524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 15:50:29.668790 1490314 docker.go:319] overlay module found
	I1213 15:50:29.674005 1490314 out.go:179] * Using the docker driver based on user configuration
	I1213 15:50:29.677075 1490314 start.go:309] selected driver: docker
	I1213 15:50:29.677088 1490314 start.go:927] validating driver "docker" against <nil>
	I1213 15:50:29.677116 1490314 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 15:50:29.677887 1490314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 15:50:29.735357 1490314 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 15:50:29.725419406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 15:50:29.735500 1490314 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 15:50:29.735709 1490314 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 15:50:29.738780 1490314 out.go:179] * Using Docker driver with root privileges
	I1213 15:50:29.741671 1490314 cni.go:84] Creating CNI manager for ""
	I1213 15:50:29.741735 1490314 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 15:50:29.741746 1490314 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 15:50:29.741829 1490314 start.go:353] cluster config:
	{Name:cert-expiration-652483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-652483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 15:50:29.745108 1490314 out.go:179] * Starting "cert-expiration-652483" primary control-plane node in "cert-expiration-652483" cluster
	I1213 15:50:29.748008 1490314 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 15:50:29.750925 1490314 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 15:50:29.753814 1490314 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 15:50:29.753849 1490314 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1213 15:50:29.753858 1490314 cache.go:65] Caching tarball of preloaded images
	I1213 15:50:29.753925 1490314 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 15:50:29.753945 1490314 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 15:50:29.753954 1490314 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1213 15:50:29.754083 1490314 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/config.json ...
	I1213 15:50:29.754102 1490314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/config.json: {Name:mk5cf9538db73b2c9f97f8b2d5b5d09b29bf78ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:50:29.774387 1490314 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 15:50:29.774399 1490314 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 15:50:29.774418 1490314 cache.go:243] Successfully downloaded all kic artifacts
	I1213 15:50:29.774448 1490314 start.go:360] acquireMachinesLock for cert-expiration-652483: {Name:mk262c0be8e89226a798926ab225ee2b586f8b40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 15:50:29.774560 1490314 start.go:364] duration metric: took 98.262µs to acquireMachinesLock for "cert-expiration-652483"
	I1213 15:50:29.774585 1490314 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-652483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-652483 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 15:50:29.774651 1490314 start.go:125] createHost starting for "" (driver="docker")
	I1213 15:50:29.778083 1490314 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 15:50:29.778321 1490314 start.go:159] libmachine.API.Create for "cert-expiration-652483" (driver="docker")
	I1213 15:50:29.778358 1490314 client.go:173] LocalClient.Create starting
	I1213 15:50:29.778443 1490314 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem
	I1213 15:50:29.778479 1490314 main.go:143] libmachine: Decoding PEM data...
	I1213 15:50:29.778497 1490314 main.go:143] libmachine: Parsing certificate...
	I1213 15:50:29.778550 1490314 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem
	I1213 15:50:29.778567 1490314 main.go:143] libmachine: Decoding PEM data...
	I1213 15:50:29.778577 1490314 main.go:143] libmachine: Parsing certificate...
	I1213 15:50:29.778964 1490314 cli_runner.go:164] Run: docker network inspect cert-expiration-652483 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 15:50:29.795736 1490314 cli_runner.go:211] docker network inspect cert-expiration-652483 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 15:50:29.795826 1490314 network_create.go:284] running [docker network inspect cert-expiration-652483] to gather additional debugging logs...
	I1213 15:50:29.795842 1490314 cli_runner.go:164] Run: docker network inspect cert-expiration-652483
	W1213 15:50:29.811873 1490314 cli_runner.go:211] docker network inspect cert-expiration-652483 returned with exit code 1
	I1213 15:50:29.811907 1490314 network_create.go:287] error running [docker network inspect cert-expiration-652483]: docker network inspect cert-expiration-652483: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-652483 not found
	I1213 15:50:29.811925 1490314 network_create.go:289] output of [docker network inspect cert-expiration-652483]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-652483 not found
	
	** /stderr **
	I1213 15:50:29.812025 1490314 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 15:50:29.830122 1490314 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2ccf9a9eb2c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:9e:64:ed:f4:f2} reservation:<nil>}
	I1213 15:50:29.830435 1490314 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2add06a95dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:3f:19:f0:2f:b1} reservation:<nil>}
	I1213 15:50:29.830934 1490314 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8517ffe6861d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:d6:d4:51:8b:3d} reservation:<nil>}
	I1213 15:50:29.831281 1490314 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-176044c8d04c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0e:5d:f1:c0:7b:26} reservation:<nil>}
	I1213 15:50:29.831997 1490314 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c3060}
	I1213 15:50:29.832016 1490314 network_create.go:124] attempt to create docker network cert-expiration-652483 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 15:50:29.832083 1490314 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-652483 cert-expiration-652483
	I1213 15:50:29.893273 1490314 network_create.go:108] docker network cert-expiration-652483 192.168.85.0/24 created
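	The network_create lines above show minikube skipping the subnets already used by other profiles and creating a labelled bridge network on the first free /24 (192.168.85.0/24). A quick host-side check of what was created, using the same docker CLI the log already invokes:
	  # show the subnet/gateway of the profile network minikube just created
	  docker network inspect cert-expiration-652483 --format '{{json .IPAM.Config}}'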
	I1213 15:50:29.893298 1490314 kic.go:121] calculated static IP "192.168.85.2" for the "cert-expiration-652483" container
	I1213 15:50:29.893393 1490314 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 15:50:29.911353 1490314 cli_runner.go:164] Run: docker volume create cert-expiration-652483 --label name.minikube.sigs.k8s.io=cert-expiration-652483 --label created_by.minikube.sigs.k8s.io=true
	I1213 15:50:29.928864 1490314 oci.go:103] Successfully created a docker volume cert-expiration-652483
	I1213 15:50:29.928945 1490314 cli_runner.go:164] Run: docker run --rm --name cert-expiration-652483-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-652483 --entrypoint /usr/bin/test -v cert-expiration-652483:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 15:50:30.579825 1490314 oci.go:107] Successfully prepared a docker volume cert-expiration-652483
	I1213 15:50:30.579877 1490314 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 15:50:30.579896 1490314 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 15:50:30.579967 1490314 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-652483:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 15:50:34.560778 1490314 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-652483:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.980759338s)
	I1213 15:50:34.560799 1490314 kic.go:203] duration metric: took 3.980911893s to extract preloaded images to volume ...
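	The docker run commands above mount the v1.34.2 containerd preload tarball and untar it into the profile volume, which is why the later crictl images check reports everything already present. A hedged host-side peek at that tarball (path copied from the log; lz4 and GNU tar assumed to be available):
	  # list the first few entries of the preload tarball without extracting it
	  lz4 -dc /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 | tar -tf - | head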
	W1213 15:50:34.560940 1490314 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 15:50:34.561048 1490314 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 15:50:34.646221 1490314 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-652483 --name cert-expiration-652483 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-652483 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-652483 --network cert-expiration-652483 --ip 192.168.85.2 --volume cert-expiration-652483:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 15:50:34.952340 1490314 cli_runner.go:164] Run: docker container inspect cert-expiration-652483 --format={{.State.Running}}
	I1213 15:50:34.974979 1490314 cli_runner.go:164] Run: docker container inspect cert-expiration-652483 --format={{.State.Status}}
	I1213 15:50:35.000393 1490314 cli_runner.go:164] Run: docker exec cert-expiration-652483 stat /var/lib/dpkg/alternatives/iptables
	I1213 15:50:35.057551 1490314 oci.go:144] the created container "cert-expiration-652483" has a running status.
	I1213 15:50:35.057572 1490314 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/cert-expiration-652483/id_rsa...
	I1213 15:50:35.305849 1490314 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/cert-expiration-652483/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 15:50:35.334719 1490314 cli_runner.go:164] Run: docker container inspect cert-expiration-652483 --format={{.State.Status}}
	I1213 15:50:35.364866 1490314 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 15:50:35.364877 1490314 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-652483 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 15:50:35.440229 1490314 cli_runner.go:164] Run: docker container inspect cert-expiration-652483 --format={{.State.Status}}
	I1213 15:50:35.478156 1490314 machine.go:94] provisionDockerMachine start ...
	I1213 15:50:35.478248 1490314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-652483
	I1213 15:50:35.509039 1490314 main.go:143] libmachine: Using SSH client type: native
	I1213 15:50:35.509368 1490314 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34178 <nil> <nil>}
	I1213 15:50:35.509375 1490314 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 15:50:35.510153 1490314 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49070->127.0.0.1:34178: read: connection reset by peer
	I1213 15:50:38.659019 1490314 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-652483
	
	I1213 15:50:38.659034 1490314 ubuntu.go:182] provisioning hostname "cert-expiration-652483"
	I1213 15:50:38.659096 1490314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-652483
	I1213 15:50:38.676026 1490314 main.go:143] libmachine: Using SSH client type: native
	I1213 15:50:38.676362 1490314 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34178 <nil> <nil>}
	I1213 15:50:38.676371 1490314 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-652483 && echo "cert-expiration-652483" | sudo tee /etc/hostname
	I1213 15:50:38.837095 1490314 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-652483
	
	I1213 15:50:38.837178 1490314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-652483
	I1213 15:50:38.855580 1490314 main.go:143] libmachine: Using SSH client type: native
	I1213 15:50:38.855889 1490314 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34178 <nil> <nil>}
	I1213 15:50:38.855903 1490314 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-652483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-652483/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-652483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 15:50:39.010021 1490314 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 15:50:39.010037 1490314 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 15:50:39.010078 1490314 ubuntu.go:190] setting up certificates
	I1213 15:50:39.010099 1490314 provision.go:84] configureAuth start
	I1213 15:50:39.010176 1490314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-652483
	I1213 15:50:39.027774 1490314 provision.go:143] copyHostCerts
	I1213 15:50:39.027834 1490314 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 15:50:39.027841 1490314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 15:50:39.027915 1490314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 15:50:39.028004 1490314 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 15:50:39.028008 1490314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 15:50:39.028032 1490314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 15:50:39.028080 1490314 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 15:50:39.028083 1490314 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 15:50:39.028104 1490314 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 15:50:39.028147 1490314 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-652483 san=[127.0.0.1 192.168.85.2 cert-expiration-652483 localhost minikube]
	I1213 15:50:39.445239 1490314 provision.go:177] copyRemoteCerts
	I1213 15:50:39.445295 1490314 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 15:50:39.445346 1490314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-652483
	I1213 15:50:39.462512 1490314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34178 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/cert-expiration-652483/id_rsa Username:docker}
	I1213 15:50:39.571396 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 15:50:39.589275 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 15:50:39.608390 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 15:50:39.626269 1490314 provision.go:87] duration metric: took 616.148948ms to configureAuth
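	configureAuth above generates a server certificate with the SANs listed in the provision.go line (127.0.0.1, 192.168.85.2, cert-expiration-652483, localhost, minikube) and copies it to /etc/docker inside the node. A sketch for inspecting the host-side copy (path taken from the scp lines above; openssl 1.1.1+ assumed for the -ext flag):
	  # print the subjectAltName entries of the freshly generated server cert
	  openssl x509 -in /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem -noout -ext subjectAltName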
	I1213 15:50:39.626288 1490314 ubuntu.go:206] setting minikube options for container-runtime
	I1213 15:50:39.626484 1490314 config.go:182] Loaded profile config "cert-expiration-652483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 15:50:39.626489 1490314 machine.go:97] duration metric: took 4.148323224s to provisionDockerMachine
	I1213 15:50:39.626500 1490314 client.go:176] duration metric: took 9.848132457s to LocalClient.Create
	I1213 15:50:39.626525 1490314 start.go:167] duration metric: took 9.848204381s to libmachine.API.Create "cert-expiration-652483"
	I1213 15:50:39.626532 1490314 start.go:293] postStartSetup for "cert-expiration-652483" (driver="docker")
	I1213 15:50:39.626540 1490314 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 15:50:39.626588 1490314 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 15:50:39.626627 1490314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-652483
	I1213 15:50:39.646715 1490314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34178 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/cert-expiration-652483/id_rsa Username:docker}
	I1213 15:50:39.751828 1490314 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 15:50:39.755374 1490314 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 15:50:39.755394 1490314 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 15:50:39.755405 1490314 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 15:50:39.755460 1490314 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 15:50:39.755539 1490314 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 15:50:39.755640 1490314 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 15:50:39.763298 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 15:50:39.781976 1490314 start.go:296] duration metric: took 155.429536ms for postStartSetup
	I1213 15:50:39.782363 1490314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-652483
	I1213 15:50:39.801572 1490314 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/config.json ...
	I1213 15:50:39.801849 1490314 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 15:50:39.801898 1490314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-652483
	I1213 15:50:39.819640 1490314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34178 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/cert-expiration-652483/id_rsa Username:docker}
	I1213 15:50:39.924822 1490314 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 15:50:39.929775 1490314 start.go:128] duration metric: took 10.155109611s to createHost
	I1213 15:50:39.929791 1490314 start.go:83] releasing machines lock for "cert-expiration-652483", held for 10.155224587s
	I1213 15:50:39.929870 1490314 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-652483
	I1213 15:50:39.947087 1490314 ssh_runner.go:195] Run: cat /version.json
	I1213 15:50:39.947103 1490314 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 15:50:39.947137 1490314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-652483
	I1213 15:50:39.947161 1490314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-652483
	I1213 15:50:39.969630 1490314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34178 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/cert-expiration-652483/id_rsa Username:docker}
	I1213 15:50:39.975489 1490314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34178 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/cert-expiration-652483/id_rsa Username:docker}
	I1213 15:50:40.178070 1490314 ssh_runner.go:195] Run: systemctl --version
	I1213 15:50:40.185452 1490314 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 15:50:40.190712 1490314 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 15:50:40.190785 1490314 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 15:50:40.219889 1490314 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 15:50:40.219914 1490314 start.go:496] detecting cgroup driver to use...
	I1213 15:50:40.219948 1490314 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 15:50:40.220017 1490314 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 15:50:40.236005 1490314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 15:50:40.249736 1490314 docker.go:218] disabling cri-docker service (if available) ...
	I1213 15:50:40.249793 1490314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 15:50:40.267462 1490314 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 15:50:40.286145 1490314 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 15:50:40.407551 1490314 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 15:50:40.524424 1490314 docker.go:234] disabling docker service ...
	I1213 15:50:40.524486 1490314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 15:50:40.547585 1490314 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 15:50:40.560978 1490314 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 15:50:40.704966 1490314 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 15:50:40.841398 1490314 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 15:50:40.855925 1490314 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 15:50:40.871380 1490314 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 15:50:40.881382 1490314 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 15:50:40.892668 1490314 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 15:50:40.892752 1490314 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 15:50:40.902058 1490314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 15:50:40.910959 1490314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 15:50:40.919854 1490314 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 15:50:40.929187 1490314 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 15:50:40.937308 1490314 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 15:50:40.948567 1490314 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 15:50:40.959299 1490314 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 15:50:40.968892 1490314 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 15:50:40.976781 1490314 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 15:50:40.984835 1490314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 15:50:41.105925 1490314 ssh_runner.go:195] Run: sudo systemctl restart containerd
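	The sed edits above rewrite /etc/containerd/config.toml for the cgroupfs driver detected on the host (SystemdCgroup = false), pin the pause 3.10.1 sandbox image, force the runc v2 runtime, and re-enable unprivileged ports; the daemon-reload and restart make them take effect. A hedged spot-check from the host, reusing the minikube ssh form seen in the command table:
	  # confirm the cgroup driver and sandbox image containerd will now use
	  minikube ssh -p cert-expiration-652483 "sudo grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml"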
	I1213 15:50:41.222467 1490314 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 15:50:41.222535 1490314 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 15:50:41.226425 1490314 start.go:564] Will wait 60s for crictl version
	I1213 15:50:41.226480 1490314 ssh_runner.go:195] Run: which crictl
	I1213 15:50:41.229974 1490314 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 15:50:41.253708 1490314 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 15:50:41.253784 1490314 ssh_runner.go:195] Run: containerd --version
	I1213 15:50:41.274352 1490314 ssh_runner.go:195] Run: containerd --version
	I1213 15:50:41.305918 1490314 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1213 15:50:41.308830 1490314 cli_runner.go:164] Run: docker network inspect cert-expiration-652483 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 15:50:41.325150 1490314 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 15:50:41.329215 1490314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 15:50:41.343935 1490314 kubeadm.go:884] updating cluster {Name:cert-expiration-652483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-652483 Namespace:default APIServerHAVIP: APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 15:50:41.344042 1490314 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 15:50:41.344111 1490314 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 15:50:41.380447 1490314 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 15:50:41.380459 1490314 containerd.go:534] Images already preloaded, skipping extraction
	I1213 15:50:41.380521 1490314 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 15:50:41.408207 1490314 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 15:50:41.408220 1490314 cache_images.go:86] Images are preloaded, skipping loading
	I1213 15:50:41.408227 1490314 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1213 15:50:41.408333 1490314 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-652483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-652483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
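	The unit snippet above is the kubelet drop-in minikube generates for this node; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the 326-byte scp a few lines below. A sketch for confirming the flags the kubelet actually starts with, reusing the systemctl cat pattern from the command table:
	  # show the effective kubelet unit including minikube's 10-kubeadm.conf drop-in
	  minikube ssh -p cert-expiration-652483 "sudo systemctl cat kubelet --no-pager"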
	I1213 15:50:41.408398 1490314 ssh_runner.go:195] Run: sudo crictl info
	I1213 15:50:41.437947 1490314 cni.go:84] Creating CNI manager for ""
	I1213 15:50:41.437958 1490314 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 15:50:41.437974 1490314 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 15:50:41.437998 1490314 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-652483 NodeName:cert-expiration-652483 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 15:50:41.438117 1490314 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "cert-expiration-652483"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
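	The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new by the 2235-byte scp just below. A hedged sketch for validating it in place before kubeadm consumes it; the kubeadm binary path is an assumption based on the kubelet path in the unit above, and kubeadm config validate needs v1.26 or newer (v1.34.2 qualifies):
	  # syntax-check the generated kubeadm config inside the node
	  minikube ssh -p cert-expiration-652483 "sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"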
	
	I1213 15:50:41.438185 1490314 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 15:50:41.446401 1490314 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 15:50:41.446463 1490314 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 15:50:41.454320 1490314 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1213 15:50:41.468218 1490314 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 15:50:41.482197 1490314 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 15:50:41.496645 1490314 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 15:50:41.500572 1490314 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 15:50:41.511113 1490314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 15:50:41.623540 1490314 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 15:50:41.639473 1490314 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483 for IP: 192.168.85.2
	I1213 15:50:41.639484 1490314 certs.go:195] generating shared ca certs ...
	I1213 15:50:41.639498 1490314 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:50:41.639650 1490314 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 15:50:41.639692 1490314 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 15:50:41.639698 1490314 certs.go:257] generating profile certs ...
	I1213 15:50:41.639752 1490314 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/client.key
	I1213 15:50:41.639762 1490314 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/client.crt with IP's: []
	I1213 15:50:41.859100 1490314 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/client.crt ...
	I1213 15:50:41.859117 1490314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/client.crt: {Name:mkaf0506cc686ddd155ceb4b23360d6cc714bab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:50:41.859334 1490314 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/client.key ...
	I1213 15:50:41.859343 1490314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/client.key: {Name:mkec3d8779a2cca81866569a97f075638e547171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:50:41.859439 1490314 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/apiserver.key.4e33d50e
	I1213 15:50:41.859451 1490314 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/apiserver.crt.4e33d50e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 15:50:42.059466 1490314 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/apiserver.crt.4e33d50e ...
	I1213 15:50:42.059481 1490314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/apiserver.crt.4e33d50e: {Name:mkaac433b754f23cf75889c852cbbe3432ad2a91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:50:42.059679 1490314 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/apiserver.key.4e33d50e ...
	I1213 15:50:42.059688 1490314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/apiserver.key.4e33d50e: {Name:mk13698a45c39c956e8724ed26547c59afea6685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:50:42.059772 1490314 certs.go:382] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/apiserver.crt.4e33d50e -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/apiserver.crt
	I1213 15:50:42.059846 1490314 certs.go:386] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/apiserver.key.4e33d50e -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/apiserver.key
	I1213 15:50:42.059898 1490314 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/proxy-client.key
	I1213 15:50:42.059917 1490314 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/proxy-client.crt with IP's: []
	I1213 15:50:42.431042 1490314 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/proxy-client.crt ...
	I1213 15:50:42.431058 1490314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/proxy-client.crt: {Name:mkd01b7ed5e37108db0984bdf159af6bf68c9a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:50:42.431251 1490314 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/proxy-client.key ...
	I1213 15:50:42.431259 1490314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/proxy-client.key: {Name:mk30ec2ceec4a1d96524058a53b2bced3a8d8e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:50:42.431464 1490314 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 15:50:42.431502 1490314 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 15:50:42.431510 1490314 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 15:50:42.431540 1490314 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 15:50:42.431562 1490314 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 15:50:42.431589 1490314 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 15:50:42.431630 1490314 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 15:50:42.432244 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 15:50:42.450682 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 15:50:42.469654 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 15:50:42.487508 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 15:50:42.505136 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 15:50:42.522937 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 15:50:42.541145 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 15:50:42.559097 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/cert-expiration-652483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 15:50:42.577103 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 15:50:42.595030 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 15:50:42.614309 1490314 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 15:50:42.631217 1490314 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 15:50:42.643967 1490314 ssh_runner.go:195] Run: openssl version
	I1213 15:50:42.649948 1490314 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 15:50:42.657090 1490314 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 15:50:42.664726 1490314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 15:50:42.668464 1490314 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 15:50:42.668519 1490314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 15:50:42.709569 1490314 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 15:50:42.717171 1490314 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12529342.pem /etc/ssl/certs/3ec20f2e.0
	I1213 15:50:42.724600 1490314 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 15:50:42.732160 1490314 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 15:50:42.739751 1490314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 15:50:42.743403 1490314 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 15:50:42.743460 1490314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 15:50:42.785415 1490314 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 15:50:42.792869 1490314 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 15:50:42.800328 1490314 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 15:50:42.807980 1490314 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 15:50:42.815573 1490314 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 15:50:42.819285 1490314 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 15:50:42.819371 1490314 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 15:50:42.862158 1490314 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 15:50:42.870352 1490314 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1252934.pem /etc/ssl/certs/51391683.0
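
The block above is minikube publishing its CA bundles into the node's trust store: each PEM is copied to /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 is symlinked back to it so TLS libraries can look the certificate up by hash. A minimal sketch of the same sequence for one certificate, using paths taken from the log (illustrative only, not minikube's own code):

	# Illustrative sketch of the hash-and-symlink steps visible in the log above.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # hash-named trust-store entry
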
	I1213 15:50:42.878805 1490314 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 15:50:42.883732 1490314 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 15:50:42.883779 1490314 kubeadm.go:401] StartCluster: {Name:cert-expiration-652483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-652483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 15:50:42.883844 1490314 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 15:50:42.883918 1490314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 15:50:42.912157 1490314 cri.go:89] found id: ""
	I1213 15:50:42.912234 1490314 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 15:50:42.920052 1490314 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 15:50:42.927798 1490314 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:50:42.927854 1490314 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:50:42.936114 1490314 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:50:42.936132 1490314 kubeadm.go:158] found existing configuration files:
	
	I1213 15:50:42.936197 1490314 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 15:50:42.944076 1490314 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:50:42.944133 1490314 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:50:42.951786 1490314 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 15:50:42.959723 1490314 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:50:42.959787 1490314 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:50:42.967135 1490314 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 15:50:42.974793 1490314 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:50:42.974848 1490314 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:50:42.982274 1490314 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 15:50:42.989932 1490314 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:50:42.989987 1490314 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 15:50:42.997470 1490314 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:50:43.039801 1490314 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 15:50:43.039851 1490314 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:50:43.066303 1490314 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:50:43.066369 1490314 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:50:43.066411 1490314 kubeadm.go:319] OS: Linux
	I1213 15:50:43.066455 1490314 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:50:43.066502 1490314 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:50:43.066548 1490314 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:50:43.066595 1490314 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:50:43.066641 1490314 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:50:43.066690 1490314 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:50:43.066740 1490314 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:50:43.066787 1490314 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:50:43.066831 1490314 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:50:43.151957 1490314 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:50:43.152082 1490314 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:50:43.152171 1490314 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:50:43.158372 1490314 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:50:43.165245 1490314 out.go:252]   - Generating certificates and keys ...
	I1213 15:50:43.165346 1490314 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:50:43.165423 1490314 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:50:43.321567 1490314 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 15:50:43.751937 1490314 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 15:50:44.291944 1490314 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 15:50:44.813950 1490314 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 15:50:45.074160 1490314 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 15:50:45.074488 1490314 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-652483 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 15:50:46.151024 1490314 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 15:50:46.151353 1490314 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-652483 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 15:50:46.930492 1490314 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 15:50:47.107549 1490314 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 15:50:47.213143 1490314 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 15:50:47.213366 1490314 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:50:47.521598 1490314 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:50:48.059498 1490314 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:50:48.552979 1490314 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:50:48.663068 1490314 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:50:49.005170 1490314 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:50:49.007760 1490314 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:50:49.011496 1490314 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:50:49.017354 1490314 out.go:252]   - Booting up control plane ...
	I1213 15:50:49.017477 1490314 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:50:49.017559 1490314 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:50:49.017622 1490314 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:50:49.041842 1490314 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:50:49.041944 1490314 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:50:49.049976 1490314 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:50:49.050301 1490314 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:50:49.050540 1490314 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:50:49.187497 1490314 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:50:49.187610 1490314 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:50:50.191250 1490314 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001911566s
	I1213 15:50:50.194777 1490314 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 15:50:50.199080 1490314 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1213 15:50:50.199173 1490314 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 15:50:50.199247 1490314 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 15:50:53.287305 1490314 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.088453063s
	I1213 15:50:54.770746 1490314 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.572362135s
	I1213 15:50:56.700744 1490314 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502137999s
	I1213 15:50:56.732433 1490314 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 15:50:56.747734 1490314 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 15:50:56.763113 1490314 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 15:50:56.763305 1490314 kubeadm.go:319] [mark-control-plane] Marking the node cert-expiration-652483 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 15:50:56.779219 1490314 kubeadm.go:319] [bootstrap-token] Using token: qprkod.fe65i3jnk3hf46jg
	I1213 15:50:56.782253 1490314 out.go:252]   - Configuring RBAC rules ...
	I1213 15:50:56.782399 1490314 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 15:50:56.790614 1490314 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 15:50:56.801205 1490314 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 15:50:56.805596 1490314 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 15:50:56.811934 1490314 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 15:50:56.816434 1490314 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 15:50:57.107785 1490314 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 15:50:57.554090 1490314 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 15:50:58.108377 1490314 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 15:50:58.109846 1490314 kubeadm.go:319] 
	I1213 15:50:58.109910 1490314 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 15:50:58.109914 1490314 kubeadm.go:319] 
	I1213 15:50:58.109986 1490314 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 15:50:58.109989 1490314 kubeadm.go:319] 
	I1213 15:50:58.110011 1490314 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 15:50:58.110065 1490314 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 15:50:58.110112 1490314 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 15:50:58.110115 1490314 kubeadm.go:319] 
	I1213 15:50:58.110181 1490314 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 15:50:58.110185 1490314 kubeadm.go:319] 
	I1213 15:50:58.110228 1490314 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 15:50:58.110231 1490314 kubeadm.go:319] 
	I1213 15:50:58.110279 1490314 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 15:50:58.110348 1490314 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 15:50:58.110411 1490314 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 15:50:58.110414 1490314 kubeadm.go:319] 
	I1213 15:50:58.110492 1490314 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 15:50:58.110563 1490314 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 15:50:58.110566 1490314 kubeadm.go:319] 
	I1213 15:50:58.110648 1490314 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qprkod.fe65i3jnk3hf46jg \
	I1213 15:50:58.110744 1490314 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:590ff7d5a34ba2f13bc4446ba280674514ec0440f2cd73335e75879dbf7fc61d \
	I1213 15:50:58.110762 1490314 kubeadm.go:319] 	--control-plane 
	I1213 15:50:58.110765 1490314 kubeadm.go:319] 
	I1213 15:50:58.110843 1490314 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 15:50:58.110845 1490314 kubeadm.go:319] 
	I1213 15:50:58.111281 1490314 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qprkod.fe65i3jnk3hf46jg \
	I1213 15:50:58.111416 1490314 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:590ff7d5a34ba2f13bc4446ba280674514ec0440f2cd73335e75879dbf7fc61d 
	I1213 15:50:58.116625 1490314 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 15:50:58.116844 1490314 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:50:58.116947 1490314 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:50:58.116962 1490314 cni.go:84] Creating CNI manager for ""
	I1213 15:50:58.116969 1490314 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 15:50:58.120235 1490314 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 15:50:58.123116 1490314 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 15:50:58.127505 1490314 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 15:50:58.127515 1490314 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 15:50:58.144898 1490314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 15:50:58.466901 1490314 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 15:50:58.467030 1490314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 15:50:58.467112 1490314 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-652483 minikube.k8s.io/updated_at=2025_12_13T15_50_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=cert-expiration-652483 minikube.k8s.io/primary=true
	I1213 15:50:58.670690 1490314 ops.go:34] apiserver oom_adj: -16
	I1213 15:50:58.670706 1490314 kubeadm.go:1114] duration metric: took 203.728373ms to wait for elevateKubeSystemPrivileges
	I1213 15:50:58.670718 1490314 kubeadm.go:403] duration metric: took 15.786946253s to StartCluster
	I1213 15:50:58.670733 1490314 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:50:58.670798 1490314 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 15:50:58.671720 1490314 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:50:58.671923 1490314 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 15:50:58.672002 1490314 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 15:50:58.672244 1490314 config.go:182] Loaded profile config "cert-expiration-652483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 15:50:58.672287 1490314 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 15:50:58.672341 1490314 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-652483"
	I1213 15:50:58.672349 1490314 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-652483"
	I1213 15:50:58.672355 1490314 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-652483"
	I1213 15:50:58.672366 1490314 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-652483"
	I1213 15:50:58.672374 1490314 host.go:66] Checking if "cert-expiration-652483" exists ...
	I1213 15:50:58.672697 1490314 cli_runner.go:164] Run: docker container inspect cert-expiration-652483 --format={{.State.Status}}
	I1213 15:50:58.673024 1490314 cli_runner.go:164] Run: docker container inspect cert-expiration-652483 --format={{.State.Status}}
	I1213 15:50:58.676185 1490314 out.go:179] * Verifying Kubernetes components...
	I1213 15:50:58.680285 1490314 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 15:50:58.706424 1490314 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-652483"
	I1213 15:50:58.706454 1490314 host.go:66] Checking if "cert-expiration-652483" exists ...
	I1213 15:50:58.706905 1490314 cli_runner.go:164] Run: docker container inspect cert-expiration-652483 --format={{.State.Status}}
	I1213 15:50:58.727768 1490314 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 15:50:58.731347 1490314 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 15:50:58.731360 1490314 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 15:50:58.731433 1490314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-652483
	I1213 15:50:58.751667 1490314 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 15:50:58.751681 1490314 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 15:50:58.751744 1490314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-652483
	I1213 15:50:58.773356 1490314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34178 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/cert-expiration-652483/id_rsa Username:docker}
	I1213 15:50:58.791370 1490314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34178 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/cert-expiration-652483/id_rsa Username:docker}
	I1213 15:50:58.982773 1490314 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 15:50:58.982951 1490314 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 15:50:59.016761 1490314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 15:50:59.025928 1490314 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 15:50:59.450260 1490314 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
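
The "host record injected" line corresponds to the sed pipeline run a few entries earlier, which edits the coredns ConfigMap before replacing it. Reconstructed from those sed expressions (not read back from the cluster), the injected Corefile fragment is roughly:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}

together with a log directive added ahead of errors, so that in-cluster DNS resolves host.minikube.internal to 192.168.85.1.
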
	I1213 15:50:59.452202 1490314 api_server.go:52] waiting for apiserver process to appear ...
	I1213 15:50:59.452252 1490314 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:50:59.647625 1490314 api_server.go:72] duration metric: took 975.664337ms to wait for apiserver process to appear ...
	I1213 15:50:59.647638 1490314 api_server.go:88] waiting for apiserver healthz status ...
	I1213 15:50:59.647659 1490314 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 15:50:59.658447 1490314 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 15:50:59.660954 1490314 api_server.go:141] control plane version: v1.34.2
	I1213 15:50:59.660972 1490314 api_server.go:131] duration metric: took 13.328798ms to wait for apiserver health ...
	I1213 15:50:59.660980 1490314 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 15:50:59.664586 1490314 system_pods.go:59] 5 kube-system pods found
	I1213 15:50:59.664612 1490314 system_pods.go:61] "etcd-cert-expiration-652483" [fdd2da0d-8db0-4a46-b3dd-29dc5232b80e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 15:50:59.664620 1490314 system_pods.go:61] "kube-apiserver-cert-expiration-652483" [cd2f2c93-3b5b-4039-970b-8020868fc830] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 15:50:59.664628 1490314 system_pods.go:61] "kube-controller-manager-cert-expiration-652483" [d6ad5fcd-6bf4-43db-bdbb-db572e594899] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 15:50:59.664633 1490314 system_pods.go:61] "kube-scheduler-cert-expiration-652483" [a63f6bea-3755-435b-9b53-30e0c168babc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 15:50:59.664636 1490314 system_pods.go:61] "storage-provisioner" [dc7f17e6-37ef-4194-b338-975eee512286] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 15:50:59.664641 1490314 system_pods.go:74] duration metric: took 3.656446ms to wait for pod list to return data ...
	I1213 15:50:59.664651 1490314 kubeadm.go:587] duration metric: took 992.708632ms to wait for: map[apiserver:true system_pods:true]
	I1213 15:50:59.664661 1490314 node_conditions.go:102] verifying NodePressure condition ...
	I1213 15:50:59.666131 1490314 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 15:50:59.667701 1490314 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 15:50:59.667720 1490314 node_conditions.go:123] node cpu capacity is 2
	I1213 15:50:59.667731 1490314 node_conditions.go:105] duration metric: took 3.065715ms to run NodePressure ...
	I1213 15:50:59.667742 1490314 start.go:242] waiting for startup goroutines ...
	I1213 15:50:59.669110 1490314 addons.go:530] duration metric: took 996.813617ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 15:50:59.954943 1490314 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-652483" context rescaled to 1 replicas
	I1213 15:50:59.954971 1490314 start.go:247] waiting for cluster config update ...
	I1213 15:50:59.954982 1490314 start.go:256] writing updated cluster config ...
	I1213 15:50:59.955281 1490314 ssh_runner.go:195] Run: rm -f paused
	I1213 15:51:00.142688 1490314 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 15:51:00.146513 1490314 out.go:179] * Done! kubectl is now configured to use "cert-expiration-652483" cluster and "default" namespace by default
	I1213 15:53:06.096078 1450159 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001945602s
	I1213 15:53:06.097403 1450159 kubeadm.go:319] 
	I1213 15:53:06.097525 1450159 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:53:06.097577 1450159 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:53:06.097681 1450159 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:53:06.097687 1450159 kubeadm.go:319] 
	I1213 15:53:06.097791 1450159 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:53:06.097822 1450159 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:53:06.097853 1450159 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:53:06.097857 1450159 kubeadm.go:319] 
	I1213 15:53:06.102245 1450159 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:53:06.102671 1450159 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:53:06.102788 1450159 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:53:06.103027 1450159 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 15:53:06.103037 1450159 kubeadm.go:319] 
	I1213 15:53:06.103108 1450159 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
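
For this v1.35.0-beta.0 run the SystemVerification warning above is worth noting: the node kernel (5.15.0-1084-aws) is on cgroup v1, and per the warning text a kubelet at v1.35 or newer only tolerates cgroup v1 when the KubeletConfiguration option FailCgroupV1 is explicitly set to false (and the validation is explicitly skipped). A minimal fragment, with the field spelling taken from the warning rather than verified against the v1.35 API, might look like:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Assumption: field name per the kubeadm warning; needed on cgroup v1 hosts
	# to let kubelet v1.35+ start.
	failCgroupV1: false

That would plausibly explain why the kubelet here never answered on http://127.0.0.1:10248/healthz and the init timed out after 4m0s.
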
	I1213 15:53:06.103171 1450159 kubeadm.go:403] duration metric: took 12m18.665565751s to StartCluster
	I1213 15:53:06.103211 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 15:53:06.103299 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 15:53:06.130642 1450159 cri.go:89] found id: ""
	I1213 15:53:06.130664 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.130672 1450159 logs.go:284] No container was found matching "kube-apiserver"
	I1213 15:53:06.130678 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 15:53:06.130757 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 15:53:06.155752 1450159 cri.go:89] found id: ""
	I1213 15:53:06.155777 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.155786 1450159 logs.go:284] No container was found matching "etcd"
	I1213 15:53:06.155792 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 15:53:06.155875 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 15:53:06.180803 1450159 cri.go:89] found id: ""
	I1213 15:53:06.180828 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.180837 1450159 logs.go:284] No container was found matching "coredns"
	I1213 15:53:06.180847 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 15:53:06.180907 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 15:53:06.206566 1450159 cri.go:89] found id: ""
	I1213 15:53:06.206594 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.206603 1450159 logs.go:284] No container was found matching "kube-scheduler"
	I1213 15:53:06.206609 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 15:53:06.206695 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 15:53:06.231426 1450159 cri.go:89] found id: ""
	I1213 15:53:06.231458 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.231468 1450159 logs.go:284] No container was found matching "kube-proxy"
	I1213 15:53:06.231474 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 15:53:06.231566 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 15:53:06.256941 1450159 cri.go:89] found id: ""
	I1213 15:53:06.256964 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.256972 1450159 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 15:53:06.256979 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 15:53:06.257041 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 15:53:06.281418 1450159 cri.go:89] found id: ""
	I1213 15:53:06.281442 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.281451 1450159 logs.go:284] No container was found matching "kindnet"
	I1213 15:53:06.281457 1450159 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 15:53:06.281516 1450159 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 15:53:06.307575 1450159 cri.go:89] found id: ""
	I1213 15:53:06.307602 1450159 logs.go:282] 0 containers: []
	W1213 15:53:06.307610 1450159 logs.go:284] No container was found matching "storage-provisioner"
	I1213 15:53:06.307621 1450159 logs.go:123] Gathering logs for kubelet ...
	I1213 15:53:06.307635 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 15:53:06.367685 1450159 logs.go:123] Gathering logs for dmesg ...
	I1213 15:53:06.367771 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 15:53:06.393106 1450159 logs.go:123] Gathering logs for describe nodes ...
	I1213 15:53:06.393134 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 15:53:06.456236 1450159 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 15:53:06.456308 1450159 logs.go:123] Gathering logs for containerd ...
	I1213 15:53:06.456334 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 15:53:06.502009 1450159 logs.go:123] Gathering logs for container status ...
	I1213 15:53:06.502045 1450159 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 15:53:06.531909 1450159 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001945602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 15:53:06.531954 1450159 out.go:285] * 
	W1213 15:53:06.532033 1450159 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001945602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:53:06.532220 1450159 out.go:285] * 
	W1213 15:53:06.534619 1450159 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 15:53:06.541372 1450159 out.go:203] 
	W1213 15:53:06.544133 1450159 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001945602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 15:53:06.544189 1450159 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 15:53:06.544223 1450159 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 15:53:06.547500 1450159 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 15:44:58 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:44:58.807547040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:44:58 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:44:58.808796832Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.290672148s"
	Dec 13 15:44:58 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:44:58.808851952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 13 15:44:58 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:44:58.810584546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
	Dec 13 15:44:59 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:44:59.456075144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 13 15:44:59 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:44:59.458262751Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
	Dec 13 15:44:59 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:44:59.460349765Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 13 15:44:59 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:44:59.464621604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 13 15:44:59 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:44:59.465550394Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 654.924627ms"
	Dec 13 15:44:59 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:44:59.465679630Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
	Dec 13 15:44:59 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:44:59.466572376Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 15:45:02 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:45:02.240527841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:45:02 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:45:02.243129608Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21140371"
	Dec 13 15:45:02 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:45:02.245259197Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:45:02 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:45:02.251034434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:45:02 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:45:02.253435935Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 2.786813362s"
	Dec 13 15:45:02 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:45:02.253495569Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
	Dec 13 15:49:51 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:49:51.871622859Z" level=info msg="container event discarded" container=bb377c8bcedcac1973aae8a68ce60a279f19172b9f3f670a511b3566fabfaca6 type=CONTAINER_DELETED_EVENT
	Dec 13 15:49:51 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:49:51.886915507Z" level=info msg="container event discarded" container=4ccf3c0c89c16e0193fe51c39eaab8a84a4a142bbe501b9313b1eeef732232e9 type=CONTAINER_DELETED_EVENT
	Dec 13 15:49:51 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:49:51.901159343Z" level=info msg="container event discarded" container=85d24f57d12c5a4191091dc06278fa919a98c4aa1a53ae4c5d22d47d9d4feeba type=CONTAINER_DELETED_EVENT
	Dec 13 15:49:51 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:49:51.901212511Z" level=info msg="container event discarded" container=9509e5dfb9003d8b5c034a07be6a7b5caa6b3ead49d69edbd5578b0551281d90 type=CONTAINER_DELETED_EVENT
	Dec 13 15:49:51 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:49:51.919417614Z" level=info msg="container event discarded" container=45872315054a54e9911482e62bd21c205710ee0beec723806877528e2089ff07 type=CONTAINER_DELETED_EVENT
	Dec 13 15:49:51 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:49:51.919477001Z" level=info msg="container event discarded" container=cf76b2c104a304d7e19f24a89c965d3e575e8ee844097310a6c2551f5f079030 type=CONTAINER_DELETED_EVENT
	Dec 13 15:49:51 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:49:51.935692909Z" level=info msg="container event discarded" container=d82b78ac5aacc169f71f91419b16fe648a93f2212bf117f6e49fcd79eabd1b6b type=CONTAINER_DELETED_EVENT
	Dec 13 15:49:51 kubernetes-upgrade-098313 containerd[555]: time="2025-12-13T15:49:51.935751156Z" level=info msg="container event discarded" container=0b69e15bc318bb9feda4b91b780e13b264eb9053adbd48813fadbca180bfac7e type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 15:53:08 up  7:35,  0 user,  load average: 0.64, 1.43, 1.69
	Linux kubernetes-upgrade-098313 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 15:53:04 kubernetes-upgrade-098313 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:53:05 kubernetes-upgrade-098313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 15:53:05 kubernetes-upgrade-098313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:53:05 kubernetes-upgrade-098313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:53:05 kubernetes-upgrade-098313 kubelet[14142]: E1213 15:53:05.634148   14142 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:53:05 kubernetes-upgrade-098313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:53:05 kubernetes-upgrade-098313 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:53:06 kubernetes-upgrade-098313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 15:53:06 kubernetes-upgrade-098313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:53:06 kubernetes-upgrade-098313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:53:06 kubernetes-upgrade-098313 kubelet[14212]: E1213 15:53:06.405960   14212 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:53:06 kubernetes-upgrade-098313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:53:06 kubernetes-upgrade-098313 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:53:07 kubernetes-upgrade-098313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 15:53:07 kubernetes-upgrade-098313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:53:07 kubernetes-upgrade-098313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:53:07 kubernetes-upgrade-098313 kubelet[14240]: E1213 15:53:07.179262   14240 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:53:07 kubernetes-upgrade-098313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:53:07 kubernetes-upgrade-098313 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 15:53:07 kubernetes-upgrade-098313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 15:53:07 kubernetes-upgrade-098313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:53:07 kubernetes-upgrade-098313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 15:53:07 kubernetes-upgrade-098313 kubelet[14260]: E1213 15:53:07.906214   14260 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 15:53:07 kubernetes-upgrade-098313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 15:53:07 kubernetes-upgrade-098313 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-098313 -n kubernetes-upgrade-098313
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-098313 -n kubernetes-upgrade-098313: exit status 2 (360.275722ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-098313" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-098313" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-098313
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-098313: (2.286004447s)
--- FAIL: TestKubernetesUpgrade (798.26s)
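
The kubelet restart loop above ("kubelet is configured to not run on a host using cgroup v1") and the kubeadm SystemVerification warning point at the same root cause: kubelet v1.35 refuses to start on a cgroup v1 host unless cgroup v1 support is explicitly re-enabled. A minimal sketch of the checks and workarounds suggested by the log itself follows; the fail-cgroup-v1 spelling passed through minikube's --extra-config is an assumption inferred from the 'FailCgroupV1' option named in the warning and is not confirmed by this run.

	# Confirm which cgroup version the host is running
	# (prints "cgroup2fs" on cgroup v2, "tmpfs" on cgroup v1):
	stat -fc %T /sys/fs/cgroup

	# Suggestion printed by minikube in the log above:
	minikube start -p kubernetes-upgrade-098313 --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

	# Alternative hinted at by the SystemVerification warning
	# (flag name assumed from 'FailCgroupV1'; not verified by this run):
	minikube start -p kubernetes-upgrade-098313 --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.fail-cgroup-v1=false
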

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (512.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m31.206259607s)

                                                
                                                
-- stdout --
	* [no-preload-439544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-439544" primary control-plane node in "no-preload-439544" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 15:54:10.849627 1500765 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:54:10.849771 1500765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:54:10.849777 1500765 out.go:374] Setting ErrFile to fd 2...
	I1213 15:54:10.849782 1500765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:54:10.850176 1500765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:54:10.850683 1500765 out.go:368] Setting JSON to false
	I1213 15:54:10.851696 1500765 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27400,"bootTime":1765613851,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 15:54:10.851808 1500765 start.go:143] virtualization:  
	I1213 15:54:10.859459 1500765 out.go:179] * [no-preload-439544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 15:54:10.863957 1500765 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 15:54:10.864008 1500765 notify.go:221] Checking for updates...
	I1213 15:54:10.870780 1500765 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 15:54:10.873873 1500765 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 15:54:10.876905 1500765 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 15:54:10.879992 1500765 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 15:54:10.883141 1500765 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 15:54:10.886940 1500765 config.go:182] Loaded profile config "old-k8s-version-912710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1213 15:54:10.887047 1500765 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 15:54:10.932486 1500765 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 15:54:10.932616 1500765 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 15:54:11.017168 1500765 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 15:54:11.006894935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 15:54:11.017279 1500765 docker.go:319] overlay module found
	I1213 15:54:11.020567 1500765 out.go:179] * Using the docker driver based on user configuration
	I1213 15:54:11.023469 1500765 start.go:309] selected driver: docker
	I1213 15:54:11.023488 1500765 start.go:927] validating driver "docker" against <nil>
	I1213 15:54:11.023507 1500765 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 15:54:11.024193 1500765 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 15:54:11.100771 1500765 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 15:54:11.091186272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 15:54:11.100942 1500765 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 15:54:11.101178 1500765 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 15:54:11.105150 1500765 out.go:179] * Using Docker driver with root privileges
	I1213 15:54:11.108088 1500765 cni.go:84] Creating CNI manager for ""
	I1213 15:54:11.108162 1500765 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 15:54:11.108171 1500765 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 15:54:11.108263 1500765 start.go:353] cluster config:
	{Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 15:54:11.111421 1500765 out.go:179] * Starting "no-preload-439544" primary control-plane node in "no-preload-439544" cluster
	I1213 15:54:11.114186 1500765 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 15:54:11.117260 1500765 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 15:54:11.120102 1500765 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 15:54:11.120269 1500765 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/config.json ...
	I1213 15:54:11.120316 1500765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/config.json: {Name:mke69a8b0b6af95eb65dde119c7d3a17a3ec5cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:54:11.120533 1500765 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 15:54:11.120702 1500765 cache.go:107] acquiring lock: {Name:mk6458bc7297def26ffc87aa852ed603976a017c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 15:54:11.120802 1500765 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 15:54:11.120811 1500765 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 118.406µs
	I1213 15:54:11.120823 1500765 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 15:54:11.120834 1500765 cache.go:107] acquiring lock: {Name:mk04216f72d0f7cd3d2308def830acac11c8b85d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 15:54:11.120908 1500765 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:54:11.121096 1500765 cache.go:107] acquiring lock: {Name:mk2054b1540f1c54f9b25f5f78ec681c8220cfcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 15:54:11.121167 1500765 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:54:11.121263 1500765 cache.go:107] acquiring lock: {Name:mke9c9289e43b08c6e721f866225f618ba3afddf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 15:54:11.121327 1500765 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:54:11.121423 1500765 cache.go:107] acquiring lock: {Name:mkd9f47dfe476ebd2c352fdee514a99c9fba7295 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 15:54:11.121486 1500765 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:54:11.121578 1500765 cache.go:107] acquiring lock: {Name:mkecf0483a10d405cf273c97b7180611bb889c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 15:54:11.121622 1500765 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 15:54:11.121630 1500765 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 53.759µs
	I1213 15:54:11.121636 1500765 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 15:54:11.121646 1500765 cache.go:107] acquiring lock: {Name:mkb08190a177fa29b2e45167b12d4742acf808cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 15:54:11.121683 1500765 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 15:54:11.121688 1500765 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 42.994µs
	I1213 15:54:11.121694 1500765 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 15:54:11.121716 1500765 cache.go:107] acquiring lock: {Name:mk18c875751b02ce01ad21e18c1d2a3a9ed5d930 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 15:54:11.121783 1500765 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:54:11.122746 1500765 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:54:11.123471 1500765 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:54:11.124315 1500765 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:54:11.124815 1500765 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:54:11.127849 1500765 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:54:11.146934 1500765 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 15:54:11.146963 1500765 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 15:54:11.146980 1500765 cache.go:243] Successfully downloaded all kic artifacts
	I1213 15:54:11.147036 1500765 start.go:360] acquireMachinesLock for no-preload-439544: {Name:mk6eb67fc85c056d1917e38b306c3e4e0ae30393 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 15:54:11.147230 1500765 start.go:364] duration metric: took 132.026µs to acquireMachinesLock for "no-preload-439544"
	I1213 15:54:11.147301 1500765 start.go:93] Provisioning new machine with config: &{Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNS
Log:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 15:54:11.147439 1500765 start.go:125] createHost starting for "" (driver="docker")
	I1213 15:54:11.152962 1500765 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 15:54:11.153228 1500765 start.go:159] libmachine.API.Create for "no-preload-439544" (driver="docker")
	I1213 15:54:11.153255 1500765 client.go:173] LocalClient.Create starting
	I1213 15:54:11.153328 1500765 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem
	I1213 15:54:11.153370 1500765 main.go:143] libmachine: Decoding PEM data...
	I1213 15:54:11.153386 1500765 main.go:143] libmachine: Parsing certificate...
	I1213 15:54:11.153431 1500765 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem
	I1213 15:54:11.153451 1500765 main.go:143] libmachine: Decoding PEM data...
	I1213 15:54:11.153463 1500765 main.go:143] libmachine: Parsing certificate...
	I1213 15:54:11.153836 1500765 cli_runner.go:164] Run: docker network inspect no-preload-439544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 15:54:11.180830 1500765 cli_runner.go:211] docker network inspect no-preload-439544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 15:54:11.180930 1500765 network_create.go:284] running [docker network inspect no-preload-439544] to gather additional debugging logs...
	I1213 15:54:11.180947 1500765 cli_runner.go:164] Run: docker network inspect no-preload-439544
	W1213 15:54:11.201650 1500765 cli_runner.go:211] docker network inspect no-preload-439544 returned with exit code 1
	I1213 15:54:11.201677 1500765 network_create.go:287] error running [docker network inspect no-preload-439544]: docker network inspect no-preload-439544: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-439544 not found
	I1213 15:54:11.201691 1500765 network_create.go:289] output of [docker network inspect no-preload-439544]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-439544 not found
	
	** /stderr **
	I1213 15:54:11.201796 1500765 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 15:54:11.224700 1500765 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2ccf9a9eb2c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:9e:64:ed:f4:f2} reservation:<nil>}
	I1213 15:54:11.225012 1500765 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2add06a95dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:3f:19:f0:2f:b1} reservation:<nil>}
	I1213 15:54:11.225262 1500765 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8517ffe6861d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:d6:d4:51:8b:3d} reservation:<nil>}
	I1213 15:54:11.225557 1500765 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e6a021155172 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c6:49:6b:1e:97:8b} reservation:<nil>}
	I1213 15:54:11.225981 1500765 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bb5ca0}
	I1213 15:54:11.225998 1500765 network_create.go:124] attempt to create docker network no-preload-439544 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 15:54:11.226054 1500765 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-439544 no-preload-439544
	I1213 15:54:11.312874 1500765 network_create.go:108] docker network no-preload-439544 192.168.85.0/24 created
	I1213 15:54:11.312967 1500765 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-439544" container
	I1213 15:54:11.313072 1500765 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 15:54:11.339165 1500765 cli_runner.go:164] Run: docker volume create no-preload-439544 --label name.minikube.sigs.k8s.io=no-preload-439544 --label created_by.minikube.sigs.k8s.io=true
	I1213 15:54:11.356670 1500765 oci.go:103] Successfully created a docker volume no-preload-439544
	I1213 15:54:11.356763 1500765 cli_runner.go:164] Run: docker run --rm --name no-preload-439544-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-439544 --entrypoint /usr/bin/test -v no-preload-439544:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 15:54:11.475615 1500765 cache.go:162] opening:  /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 15:54:11.493018 1500765 cache.go:162] opening:  /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 15:54:11.525089 1500765 cache.go:162] opening:  /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 15:54:11.537955 1500765 cache.go:162] opening:  /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 15:54:11.546399 1500765 cache.go:162] opening:  /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 15:54:11.939475 1500765 cache.go:157] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 15:54:11.939505 1500765 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 818.080767ms
	I1213 15:54:11.939517 1500765 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 15:54:12.068764 1500765 oci.go:107] Successfully prepared a docker volume no-preload-439544
	I1213 15:54:12.069109 1500765 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1213 15:54:12.069303 1500765 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 15:54:12.069461 1500765 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 15:54:12.159175 1500765 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-439544 --name no-preload-439544 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-439544 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-439544 --network no-preload-439544 --ip 192.168.85.2 --volume no-preload-439544:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 15:54:12.451287 1500765 cache.go:157] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 15:54:12.451384 1500765 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.330287694s
	I1213 15:54:12.451413 1500765 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 15:54:12.474967 1500765 cache.go:157] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 15:54:12.475075 1500765 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.353810696s
	I1213 15:54:12.475353 1500765 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 15:54:12.509366 1500765 cache.go:157] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 15:54:12.509438 1500765 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.387720995s
	I1213 15:54:12.509470 1500765 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 15:54:12.584195 1500765 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Running}}
	I1213 15:54:12.619218 1500765 cache.go:157] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 15:54:12.619300 1500765 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.498464207s
	I1213 15:54:12.619338 1500765 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 15:54:12.619369 1500765 cache.go:87] Successfully saved all images to host disk.
	I1213 15:54:12.620816 1500765 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 15:54:12.649783 1500765 cli_runner.go:164] Run: docker exec no-preload-439544 stat /var/lib/dpkg/alternatives/iptables
	I1213 15:54:12.724249 1500765 oci.go:144] the created container "no-preload-439544" has a running status.
	I1213 15:54:12.724279 1500765 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa...
	I1213 15:54:13.351599 1500765 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 15:54:13.381666 1500765 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 15:54:13.417763 1500765 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 15:54:13.417783 1500765 kic_runner.go:114] Args: [docker exec --privileged no-preload-439544 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 15:54:13.513732 1500765 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 15:54:13.537837 1500765 machine.go:94] provisionDockerMachine start ...
	I1213 15:54:13.537944 1500765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 15:54:13.565841 1500765 main.go:143] libmachine: Using SSH client type: native
	I1213 15:54:13.566460 1500765 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34193 <nil> <nil>}
	I1213 15:54:13.566491 1500765 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 15:54:13.567189 1500765 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59354->127.0.0.1:34193: read: connection reset by peer
	I1213 15:54:16.751245 1500765 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-439544
	
	I1213 15:54:16.751339 1500765 ubuntu.go:182] provisioning hostname "no-preload-439544"
	I1213 15:54:16.751436 1500765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 15:54:16.792283 1500765 main.go:143] libmachine: Using SSH client type: native
	I1213 15:54:16.792606 1500765 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34193 <nil> <nil>}
	I1213 15:54:16.792619 1500765 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-439544 && echo "no-preload-439544" | sudo tee /etc/hostname
	I1213 15:54:17.001033 1500765 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-439544
	
	I1213 15:54:17.001214 1500765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 15:54:17.065007 1500765 main.go:143] libmachine: Using SSH client type: native
	I1213 15:54:17.065322 1500765 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34193 <nil> <nil>}
	I1213 15:54:17.065338 1500765 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-439544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-439544/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-439544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 15:54:17.239686 1500765 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 15:54:17.239717 1500765 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 15:54:17.239745 1500765 ubuntu.go:190] setting up certificates
	I1213 15:54:17.239765 1500765 provision.go:84] configureAuth start
	I1213 15:54:17.239834 1500765 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 15:54:17.266866 1500765 provision.go:143] copyHostCerts
	I1213 15:54:17.266935 1500765 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 15:54:17.266944 1500765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 15:54:17.267020 1500765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 15:54:17.267111 1500765 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 15:54:17.267117 1500765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 15:54:17.267143 1500765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 15:54:17.267192 1500765 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 15:54:17.267197 1500765 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 15:54:17.267220 1500765 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 15:54:17.267264 1500765 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.no-preload-439544 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-439544]
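A rough openssl equivalent of the server certificate generated here (minikube does this in Go; the org and SAN list are copied from the log line above, and the openssl flags are illustrative only, not minikube's actual code path):

    # sign a server cert with the minikube CA, using the SANs listed in the log
    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.no-preload-439544" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:no-preload-439544') \
      -out server.pem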
	I1213 15:54:17.442583 1500765 provision.go:177] copyRemoteCerts
	I1213 15:54:17.442703 1500765 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 15:54:17.442791 1500765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 15:54:17.463379 1500765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34193 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 15:54:17.573099 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 15:54:17.605502 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 15:54:17.635705 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 15:54:17.668737 1500765 provision.go:87] duration metric: took 428.930172ms to configureAuth
	I1213 15:54:17.668815 1500765 ubuntu.go:206] setting minikube options for container-runtime
	I1213 15:54:17.669053 1500765 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 15:54:17.669082 1500765 machine.go:97] duration metric: took 4.131227397s to provisionDockerMachine
	I1213 15:54:17.669117 1500765 client.go:176] duration metric: took 6.515841686s to LocalClient.Create
	I1213 15:54:17.669151 1500765 start.go:167] duration metric: took 6.515925188s to libmachine.API.Create "no-preload-439544"
	I1213 15:54:17.669175 1500765 start.go:293] postStartSetup for "no-preload-439544" (driver="docker")
	I1213 15:54:17.669214 1500765 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 15:54:17.669304 1500765 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 15:54:17.669377 1500765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 15:54:17.694411 1500765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34193 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 15:54:17.811340 1500765 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 15:54:17.815907 1500765 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 15:54:17.815939 1500765 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 15:54:17.815950 1500765 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 15:54:17.816011 1500765 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 15:54:17.816098 1500765 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 15:54:17.816215 1500765 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 15:54:17.825221 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 15:54:17.846702 1500765 start.go:296] duration metric: took 177.48524ms for postStartSetup
	I1213 15:54:17.847073 1500765 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 15:54:17.866745 1500765 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/config.json ...
	I1213 15:54:17.867032 1500765 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 15:54:17.867076 1500765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 15:54:17.907564 1500765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34193 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 15:54:18.021896 1500765 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
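The two df probes above read the percent-used and free-space columns for /var; a standalone form with made-up sample outputs for illustration:

    df -h /var  | awk 'NR==2{print $5}'   # e.g. "12%"  -> fraction of /var already used
    df -BG /var | awk 'NR==2{print $4}'   # e.g. "160G" -> GiB still available on /var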
	I1213 15:54:18.032040 1500765 start.go:128] duration metric: took 6.884570828s to createHost
	I1213 15:54:18.032066 1500765 start.go:83] releasing machines lock for "no-preload-439544", held for 6.884802404s
	I1213 15:54:18.032147 1500765 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 15:54:18.065302 1500765 ssh_runner.go:195] Run: cat /version.json
	I1213 15:54:18.065357 1500765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 15:54:18.065579 1500765 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 15:54:18.065644 1500765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 15:54:18.098890 1500765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34193 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 15:54:18.107478 1500765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34193 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 15:54:18.342764 1500765 ssh_runner.go:195] Run: systemctl --version
	I1213 15:54:18.353289 1500765 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 15:54:18.359220 1500765 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 15:54:18.359387 1500765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 15:54:18.406445 1500765 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 15:54:18.406472 1500765 start.go:496] detecting cgroup driver to use...
	I1213 15:54:18.406514 1500765 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 15:54:18.406570 1500765 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 15:54:18.422997 1500765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 15:54:18.441175 1500765 docker.go:218] disabling cri-docker service (if available) ...
	I1213 15:54:18.441292 1500765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 15:54:18.469112 1500765 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 15:54:18.489372 1500765 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 15:54:18.684138 1500765 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 15:54:18.860210 1500765 docker.go:234] disabling docker service ...
	I1213 15:54:18.860291 1500765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 15:54:18.889301 1500765 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 15:54:18.905217 1500765 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 15:54:19.049093 1500765 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 15:54:19.211140 1500765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 15:54:19.227666 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 15:54:19.258528 1500765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 15:54:19.272558 1500765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 15:54:19.292212 1500765 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 15:54:19.292330 1500765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 15:54:19.301609 1500765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 15:54:19.311672 1500765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 15:54:19.321446 1500765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 15:54:19.334578 1500765 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 15:54:19.342801 1500765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 15:54:19.352857 1500765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 15:54:19.363818 1500765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
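Taken together, the sed edits above aim at a containerd config whose CRI section looks roughly like the following (a hedged sketch; the exact section layout depends on the containerd 2.x config version on the node):

    # target state of /etc/containerd/config.toml after the edits above
    # [plugins."io.containerd.grpc.v1.cri"]
    #   enable_unprivileged_ports = true
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   restrict_oom_score_adj = false
    #   [plugins."io.containerd.grpc.v1.cri".cni]
    #     conf_dir = "/etc/cni/net.d"
    # runc options elsewhere set SystemdCgroup = false, matching the "cgroupfs" driver chosen above
    sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image'   # check effective values after the restart below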
	I1213 15:54:19.375204 1500765 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 15:54:19.383849 1500765 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 15:54:19.392349 1500765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 15:54:19.560574 1500765 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 15:54:19.661241 1500765 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 15:54:19.661326 1500765 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 15:54:19.669282 1500765 start.go:564] Will wait 60s for crictl version
	I1213 15:54:19.669362 1500765 ssh_runner.go:195] Run: which crictl
	I1213 15:54:19.680104 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 15:54:19.705195 1500765 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
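The crictl version probe above works because the /etc/crictl.yaml written at 15:54:19.227666 points crictl at the containerd socket; a minimal manual check of that wiring:

    cat /etc/crictl.yaml        # expected: runtime-endpoint: unix:///run/containerd/containerd.sock
    sudo crictl info | head     # confirms crictl can reach containerd through that endpoint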
	I1213 15:54:19.705281 1500765 ssh_runner.go:195] Run: containerd --version
	I1213 15:54:19.730442 1500765 ssh_runner.go:195] Run: containerd --version
	I1213 15:54:19.757881 1500765 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 15:54:19.760807 1500765 cli_runner.go:164] Run: docker network inspect no-preload-439544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 15:54:19.792933 1500765 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 15:54:19.797318 1500765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
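The one-liner above rewrites /etc/hosts via a temp file and cp rather than sed -i, presumably because /etc/hosts is bind-mounted into the container and cannot be replaced by rename; an equivalent standalone form:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # copy in place instead of renaming over the bind mount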
	I1213 15:54:19.807800 1500765 kubeadm.go:884] updating cluster {Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 15:54:19.807919 1500765 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 15:54:19.807976 1500765 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 15:54:19.833791 1500765 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 15:54:19.833820 1500765 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 15:54:19.833869 1500765 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 15:54:19.834073 1500765 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:54:19.834194 1500765 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:54:19.834281 1500765 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:54:19.834361 1500765 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:54:19.834440 1500765 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1213 15:54:19.834526 1500765 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1213 15:54:19.834604 1500765 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:54:19.837856 1500765 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:54:19.838095 1500765 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:54:19.838243 1500765 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:54:19.838383 1500765 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:54:19.838513 1500765 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 15:54:19.838797 1500765 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:54:19.838939 1500765 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1213 15:54:19.839123 1500765 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1213 15:54:20.102415 1500765 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1213 15:54:20.102507 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1213 15:54:20.124146 1500765 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1213 15:54:20.124582 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:54:20.133837 1500765 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1213 15:54:20.133903 1500765 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1213 15:54:20.133965 1500765 ssh_runner.go:195] Run: which crictl
	I1213 15:54:20.145922 1500765 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1213 15:54:20.146003 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:54:20.175918 1500765 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1213 15:54:20.175967 1500765 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:54:20.176024 1500765 ssh_runner.go:195] Run: which crictl
	I1213 15:54:20.176101 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 15:54:20.177260 1500765 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1213 15:54:20.177330 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:54:20.195029 1500765 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1213 15:54:20.195092 1500765 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:54:20.195148 1500765 ssh_runner.go:195] Run: which crictl
	I1213 15:54:20.196768 1500765 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1213 15:54:20.196841 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:54:20.213432 1500765 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1213 15:54:20.213511 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:54:20.216700 1500765 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1213 15:54:20.216791 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1213 15:54:20.324862 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 15:54:20.324964 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:54:20.325116 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:54:20.325029 1500765 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1213 15:54:20.325190 1500765 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1213 15:54:20.325224 1500765 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:54:20.325245 1500765 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:54:20.325262 1500765 ssh_runner.go:195] Run: which crictl
	I1213 15:54:20.325315 1500765 ssh_runner.go:195] Run: which crictl
	I1213 15:54:20.325366 1500765 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1213 15:54:20.325387 1500765 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:54:20.325422 1500765 ssh_runner.go:195] Run: which crictl
	I1213 15:54:20.325475 1500765 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1213 15:54:20.325523 1500765 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1213 15:54:20.325575 1500765 ssh_runner.go:195] Run: which crictl
	I1213 15:54:20.393075 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 15:54:20.393172 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:54:20.393172 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 15:54:20.393277 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:54:20.393385 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:54:20.393492 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:54:20.393559 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:54:20.513005 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:54:20.513092 1500765 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1213 15:54:20.513161 1500765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1213 15:54:20.513219 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 15:54:20.513284 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:54:20.513334 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:54:20.513409 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 15:54:20.513471 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 15:54:20.617435 1500765 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 15:54:20.617540 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 15:54:20.617604 1500765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 15:54:20.617632 1500765 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1213 15:54:20.617662 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1213 15:54:20.617722 1500765 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 15:54:20.617742 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 15:54:20.617820 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 15:54:20.617860 1500765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1213 15:54:20.617877 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 15:54:20.701954 1500765 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1213 15:54:20.702135 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1213 15:54:20.713760 1500765 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 15:54:20.713928 1500765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 15:54:20.714049 1500765 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1213 15:54:20.714104 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1213 15:54:20.714196 1500765 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 15:54:20.714290 1500765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 15:54:20.714480 1500765 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 15:54:20.714578 1500765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 15:54:20.714754 1500765 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1213 15:54:20.714854 1500765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1213 15:54:20.754862 1500765 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1213 15:54:20.754958 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1213 15:54:20.941045 1500765 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
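Each cached tarball is transferred and then loaded into containerd the same way; a manual repro of one load step, using the same command shape as the log:

    sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
    sudo crictl images | grep pause    # should now list registry.k8s.io/pause:3.10.1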
	I1213 15:54:20.941097 1500765 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1213 15:54:20.941122 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1213 15:54:20.941180 1500765 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1213 15:54:20.941192 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1213 15:54:20.941222 1500765 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1213 15:54:20.941230 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1213 15:54:20.941259 1500765 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1213 15:54:20.941270 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	W1213 15:54:21.107584 1500765 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1213 15:54:21.107719 1500765 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1213 15:54:21.107790 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 15:54:21.265607 1500765 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1213 15:54:21.265656 1500765 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 15:54:21.265718 1500765 ssh_runner.go:195] Run: which crictl
	I1213 15:54:21.345903 1500765 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 15:54:21.345994 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 15:54:21.358820 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 15:54:23.376240 1500765 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (2.03021667s)
	I1213 15:54:23.376311 1500765 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.017458082s)
	I1213 15:54:23.376315 1500765 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1213 15:54:23.376431 1500765 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1213 15:54:23.376390 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 15:54:23.376583 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1213 15:54:23.438825 1500765 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 15:54:24.609171 1500765 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.232551186s)
	I1213 15:54:24.609527 1500765 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1213 15:54:24.609559 1500765 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 15:54:24.609648 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 15:54:24.609503 1500765 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.170633618s)
	I1213 15:54:24.609747 1500765 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 15:54:24.609826 1500765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1213 15:54:25.513080 1500765 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1213 15:54:25.513239 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1213 15:54:25.513178 1500765 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1213 15:54:25.513344 1500765 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1213 15:54:25.513399 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1213 15:54:27.065802 1500765 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.552382809s)
	I1213 15:54:27.065825 1500765 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1213 15:54:27.065842 1500765 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 15:54:27.065893 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 15:54:28.232479 1500765 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.16656357s)
	I1213 15:54:28.232505 1500765 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1213 15:54:28.232524 1500765 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 15:54:28.232573 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 15:54:29.401492 1500765 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.168894486s)
	I1213 15:54:29.401520 1500765 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1213 15:54:29.401543 1500765 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 15:54:29.401601 1500765 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1213 15:54:29.802124 1500765 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 15:54:29.802159 1500765 cache_images.go:125] Successfully loaded all cached images
	I1213 15:54:29.802165 1500765 cache_images.go:94] duration metric: took 9.968332423s to LoadCachedImages
	I1213 15:54:29.802178 1500765 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 15:54:29.802273 1500765 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-439544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
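Once the drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp at 15:54:31 below), the merged unit can be checked with standard systemd tooling; a small sketch:

    systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf override
    systemctl show -p ExecStart kubelet   # should include --node-ip=192.168.85.2 from the override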
	I1213 15:54:29.802342 1500765 ssh_runner.go:195] Run: sudo crictl info
	I1213 15:54:29.836192 1500765 cni.go:84] Creating CNI manager for ""
	I1213 15:54:29.836233 1500765 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 15:54:29.836254 1500765 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 15:54:29.836282 1500765 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-439544 NodeName:no-preload-439544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 15:54:29.836413 1500765 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-439544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
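This generated config is written to /var/tmp/minikube/kubeadm.yaml.new (scp at 15:54:31 below) and is ultimately consumed by kubeadm; a hedged sketch of the core invocation, with the final config path an assumption since this log only shows the .new file being copied and minikube passes additional flags not shown here:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml   # path assumed; log above only shows kubeadm.yaml.new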
	
	I1213 15:54:29.836500 1500765 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 15:54:29.845481 1500765 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1213 15:54:29.845558 1500765 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 15:54:29.854064 1500765 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1213 15:54:29.854162 1500765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1213 15:54:29.855003 1500765 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1213 15:54:29.855448 1500765 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1213 15:54:29.859489 1500765 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1213 15:54:29.859525 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1213 15:54:30.809511 1500765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:54:30.832341 1500765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1213 15:54:30.839924 1500765 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1213 15:54:30.840013 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1213 15:54:30.862246 1500765 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1213 15:54:30.899927 1500765 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1213 15:54:30.899982 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
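The dl.k8s.io URLs above pair each binary with a published .sha256 file; a manual equivalent of the download-and-verify step for one binary:

    curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet
    curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum -c -   # verifies the downloaded binary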
	I1213 15:54:31.642641 1500765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 15:54:31.659769 1500765 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 15:54:31.685148 1500765 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 15:54:31.709397 1500765 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 15:54:31.730843 1500765 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 15:54:31.737634 1500765 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 15:54:31.751413 1500765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 15:54:31.889751 1500765 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 15:54:31.919820 1500765 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544 for IP: 192.168.85.2
	I1213 15:54:31.919886 1500765 certs.go:195] generating shared ca certs ...
	I1213 15:54:31.919915 1500765 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:54:31.920093 1500765 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 15:54:31.920162 1500765 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 15:54:31.920184 1500765 certs.go:257] generating profile certs ...
	I1213 15:54:31.920329 1500765 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.key
	I1213 15:54:31.920364 1500765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt with IP's: []
	I1213 15:54:32.368590 1500765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt ...
	I1213 15:54:32.368624 1500765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: {Name:mk557c7fd35912d4c33cb25b7b6fda18b00cb01e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:54:32.368834 1500765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.key ...
	I1213 15:54:32.368852 1500765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.key: {Name:mk75861831afe9a6501d9e3e6d910905f6de16e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:54:32.368940 1500765 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key.75137389
	I1213 15:54:32.368960 1500765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.crt.75137389 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 15:54:32.714596 1500765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.crt.75137389 ...
	I1213 15:54:32.714631 1500765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.crt.75137389: {Name:mk3c76caf20024dc67ba2a62e537346485f7fb57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:54:32.714831 1500765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key.75137389 ...
	I1213 15:54:32.714848 1500765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key.75137389: {Name:mk8972c5c39063fb7ce387e3767e5bb383587fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:54:32.714941 1500765 certs.go:382] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.crt.75137389 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.crt
	I1213 15:54:32.715025 1500765 certs.go:386] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key.75137389 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key
	I1213 15:54:32.715093 1500765 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.key
	I1213 15:54:32.715113 1500765 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.crt with IP's: []
	I1213 15:54:32.892299 1500765 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.crt ...
	I1213 15:54:32.892335 1500765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.crt: {Name:mk0b1fcda9f39b8f2416fb61e2744e8cb39afe84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 15:54:32.892556 1500765 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.key ...
	I1213 15:54:32.892572 1500765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.key: {Name:mk7643672877ed60c910d12f588ede4ce105257f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
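	[editor's note] Each lock.go:35 record above shows a WriteFile being acquired with Delay:500ms and Timeout:1m0s before a cert or key is written. The sketch below illustrates that acquire-with-retry pattern in Go; the lock-file mechanism, function name, and paths are assumptions for illustration only, not minikube's actual locking code.

```go
// writeFileLocked: retry an exclusive lock every `delay` until `timeout`,
// then write the file and release. Illustrative sketch only.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL fails if the lock file already exists, i.e. another writer holds it.
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			defer os.Remove(lock)
			return os.WriteFile(path, data, 0o600)
		}
		if !errors.Is(err, os.ErrExist) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s", lock)
		}
		time.Sleep(delay)
	}
}

func main() {
	// Mirrors the Delay:500ms Timeout:1m0s values shown in the records above.
	fmt.Println(writeFileLocked("client.key", []byte("example"), 500*time.Millisecond, time.Minute))
}
```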
	I1213 15:54:32.892771 1500765 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 15:54:32.892823 1500765 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 15:54:32.892837 1500765 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 15:54:32.892864 1500765 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 15:54:32.892895 1500765 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 15:54:32.892927 1500765 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 15:54:32.892980 1500765 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 15:54:32.893612 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 15:54:32.924843 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 15:54:32.949472 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 15:54:32.977329 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 15:54:32.999213 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 15:54:33.024265 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 15:54:33.047720 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 15:54:33.070859 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 15:54:33.095331 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 15:54:33.118621 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 15:54:33.140700 1500765 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 15:54:33.162162 1500765 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 15:54:33.176578 1500765 ssh_runner.go:195] Run: openssl version
	I1213 15:54:33.183999 1500765 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 15:54:33.192769 1500765 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 15:54:33.201583 1500765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 15:54:33.206001 1500765 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 15:54:33.206064 1500765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 15:54:33.248347 1500765 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 15:54:33.257141 1500765 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12529342.pem /etc/ssl/certs/3ec20f2e.0
	I1213 15:54:33.265024 1500765 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 15:54:33.272814 1500765 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 15:54:33.281098 1500765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 15:54:33.286878 1500765 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 15:54:33.287003 1500765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 15:54:33.328627 1500765 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 15:54:33.338099 1500765 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 15:54:33.346011 1500765 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 15:54:33.354151 1500765 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 15:54:33.364327 1500765 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 15:54:33.368349 1500765 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 15:54:33.368445 1500765 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 15:54:33.410004 1500765 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 15:54:33.418033 1500765 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1252934.pem /etc/ssl/certs/51391683.0
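	[editor's note] The run of commands above installs each CA into the node's trust store: `openssl x509 -hash -noout` computes the subject hash, and `ln -fs` links the PEM as `/etc/ssl/certs/<hash>.0`. Below is a minimal local sketch of that hash-and-symlink step, assuming openssl is on PATH and a writable target directory (no sudo); it is not minikube's code, which runs the equivalent commands over SSH.

```go
// hashAndLink computes a certificate's OpenSSL subject hash and symlinks
// the PEM as <hash>.0 in dir, mirroring the log records above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func hashAndLink(certPath, dir string) (string, error) {
	// Same command as in the log: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	// Equivalent of `ln -fs`: replace any existing link.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := hashAndLink(os.Args[1], os.Args[2])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link)
}
```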
	I1213 15:54:33.426189 1500765 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 15:54:33.430212 1500765 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 15:54:33.430264 1500765 kubeadm.go:401] StartCluster: {Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 15:54:33.430335 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 15:54:33.430393 1500765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 15:54:33.464359 1500765 cri.go:89] found id: ""
	I1213 15:54:33.464431 1500765 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 15:54:33.479524 1500765 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 15:54:33.490217 1500765 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:54:33.490286 1500765 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:54:33.502706 1500765 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:54:33.502741 1500765 kubeadm.go:158] found existing configuration files:
	
	I1213 15:54:33.502800 1500765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 15:54:33.514085 1500765 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:54:33.514159 1500765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:54:33.523555 1500765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 15:54:33.536954 1500765 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:54:33.537042 1500765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:54:33.546279 1500765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 15:54:33.556686 1500765 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:54:33.556767 1500765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:54:33.568601 1500765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 15:54:33.577579 1500765 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:54:33.577647 1500765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
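	[editor's note] The preceding records show the stale-config check: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane endpoint and remove the file if it is absent so kubeadm can regenerate it. A simplified rendering of that loop follows; the function name and error handling are illustrative, and removing files under /etc/kubernetes would of course require root.

```go
// cleanupStaleConfigs keeps a kubeconfig only if it already references the
// expected control-plane endpoint, otherwise removes it, as in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanupStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Matches the log wording: "<endpoint>" may not be in <file> - will remove
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = os.Remove(f)
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```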
	I1213 15:54:33.597831 1500765 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:54:33.655968 1500765 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 15:54:33.656282 1500765 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 15:54:33.734247 1500765 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 15:54:33.734405 1500765 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 15:54:33.734464 1500765 kubeadm.go:319] OS: Linux
	I1213 15:54:33.734544 1500765 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 15:54:33.734616 1500765 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 15:54:33.734692 1500765 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 15:54:33.734773 1500765 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 15:54:33.734851 1500765 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 15:54:33.734924 1500765 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 15:54:33.735003 1500765 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 15:54:33.735074 1500765 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 15:54:33.735148 1500765 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 15:54:33.807063 1500765 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 15:54:33.807239 1500765 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 15:54:33.807386 1500765 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 15:54:33.815805 1500765 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 15:54:33.818601 1500765 out.go:252]   - Generating certificates and keys ...
	I1213 15:54:33.818728 1500765 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 15:54:33.818812 1500765 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 15:54:34.068636 1500765 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 15:54:34.654848 1500765 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 15:54:34.817096 1500765 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 15:54:34.931112 1500765 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 15:54:35.174102 1500765 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 15:54:35.174644 1500765 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-439544] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 15:54:35.443295 1500765 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 15:54:35.443851 1500765 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-439544] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 15:54:35.820768 1500765 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 15:54:36.274940 1500765 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 15:54:36.654945 1500765 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 15:54:36.655017 1500765 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 15:54:36.938906 1500765 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 15:54:37.941036 1500765 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 15:54:38.274964 1500765 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 15:54:38.534724 1500765 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 15:54:39.064061 1500765 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 15:54:39.065229 1500765 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 15:54:39.071693 1500765 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 15:54:39.077893 1500765 out.go:252]   - Booting up control plane ...
	I1213 15:54:39.078011 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 15:54:39.078095 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 15:54:39.078170 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 15:54:39.091003 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 15:54:39.091135 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 15:54:39.102761 1500765 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 15:54:39.104813 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 15:54:39.104900 1500765 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 15:54:39.260292 1500765 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 15:54:39.260419 1500765 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 15:58:39.261039 1500765 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000807853s
	I1213 15:58:39.261089 1500765 kubeadm.go:319] 
	I1213 15:58:39.261148 1500765 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 15:58:39.261187 1500765 kubeadm.go:319] 	- The kubelet is not running
	I1213 15:58:39.261296 1500765 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 15:58:39.261304 1500765 kubeadm.go:319] 
	I1213 15:58:39.261672 1500765 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 15:58:39.261758 1500765 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 15:58:39.261816 1500765 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 15:58:39.261825 1500765 kubeadm.go:319] 
	I1213 15:58:39.266986 1500765 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:58:39.267438 1500765 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:58:39.267565 1500765 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 15:58:39.267836 1500765 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 15:58:39.267849 1500765 kubeadm.go:319] 
	I1213 15:58:39.267937 1500765 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
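	[editor's note] The failure above comes from the [kubelet-check] phase: kubeadm polls http://127.0.0.1:10248/healthz and gives up after 4m0s when the kubelet never becomes healthy. The sketch below performs the same probe locally and can be useful when reproducing this failure on the node; the polling interval and function names are assumptions, only the URL and the 4-minute budget come from the log.

```go
// waitKubeletHealthy polls the kubelet healthz endpoint until it returns
// HTTP 200 or the context deadline passes, mirroring the [kubelet-check]
// phase described in the error above. Diagnostic sketch only.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func waitKubeletHealthy(ctx context.Context, url string) error {
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kubelet not healthy before deadline: %w", ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	// Same budget as kubeadm's check: up to 4m0s.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitKubeletHealthy(ctx, "http://127.0.0.1:10248/healthz"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet healthy")
}
```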
	W1213 15:58:39.268045 1500765 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-439544] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-439544] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000807853s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-439544] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-439544] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000807853s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 15:58:39.268127 1500765 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 15:58:39.707944 1500765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:58:39.722825 1500765 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 15:58:39.722891 1500765 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 15:58:39.730872 1500765 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 15:58:39.730891 1500765 kubeadm.go:158] found existing configuration files:
	
	I1213 15:58:39.730943 1500765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 15:58:39.738863 1500765 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 15:58:39.738930 1500765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 15:58:39.746876 1500765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 15:58:39.754819 1500765 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 15:58:39.754938 1500765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 15:58:39.762774 1500765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 15:58:39.771502 1500765 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 15:58:39.771591 1500765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 15:58:39.779685 1500765 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 15:58:39.788014 1500765 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 15:58:39.788103 1500765 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 15:58:39.798312 1500765 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 15:58:39.920925 1500765 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 15:58:39.921406 1500765 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 15:58:39.986848 1500765 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 16:02:41.545926 1500765 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 16:02:41.546108 1500765 kubeadm.go:319] 
	I1213 16:02:41.546236 1500765 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 16:02:41.551134 1500765 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:02:41.551190 1500765 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:02:41.551289 1500765 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:02:41.551373 1500765 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:02:41.551414 1500765 kubeadm.go:319] OS: Linux
	I1213 16:02:41.551459 1500765 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:02:41.551511 1500765 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:02:41.551561 1500765 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:02:41.551612 1500765 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:02:41.551663 1500765 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:02:41.551715 1500765 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:02:41.551764 1500765 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:02:41.551816 1500765 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:02:41.551866 1500765 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:02:41.551941 1500765 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:02:41.552042 1500765 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:02:41.552133 1500765 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:02:41.552199 1500765 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:02:41.555522 1500765 out.go:252]   - Generating certificates and keys ...
	I1213 16:02:41.555641 1500765 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:02:41.555717 1500765 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:02:41.555797 1500765 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 16:02:41.555873 1500765 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 16:02:41.555970 1500765 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 16:02:41.556031 1500765 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 16:02:41.556110 1500765 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 16:02:41.556213 1500765 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 16:02:41.556310 1500765 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 16:02:41.556431 1500765 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 16:02:41.556486 1500765 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 16:02:41.556559 1500765 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:02:41.556617 1500765 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:02:41.556678 1500765 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:02:41.556736 1500765 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:02:41.556817 1500765 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:02:41.556888 1500765 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:02:41.556980 1500765 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:02:41.557075 1500765 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:02:41.560042 1500765 out.go:252]   - Booting up control plane ...
	I1213 16:02:41.560143 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:02:41.560258 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:02:41.560348 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:02:41.560479 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:02:41.560588 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:02:41.560701 1500765 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:02:41.560824 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:02:41.560880 1500765 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:02:41.561017 1500765 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:02:41.561131 1500765 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 16:02:41.561233 1500765 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000293839s
	I1213 16:02:41.561265 1500765 kubeadm.go:319] 
	I1213 16:02:41.561329 1500765 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 16:02:41.561367 1500765 kubeadm.go:319] 	- The kubelet is not running
	I1213 16:02:41.561492 1500765 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 16:02:41.561506 1500765 kubeadm.go:319] 
	I1213 16:02:41.561630 1500765 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 16:02:41.561673 1500765 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 16:02:41.561708 1500765 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 16:02:41.561776 1500765 kubeadm.go:319] 
	I1213 16:02:41.561777 1500765 kubeadm.go:403] duration metric: took 8m8.131517099s to StartCluster
	I1213 16:02:41.561824 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:02:41.561903 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:02:41.594564 1500765 cri.go:89] found id: ""
	I1213 16:02:41.594594 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.594603 1500765 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:02:41.594609 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:02:41.594677 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:02:41.629231 1500765 cri.go:89] found id: ""
	I1213 16:02:41.629252 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.629260 1500765 logs.go:284] No container was found matching "etcd"
	I1213 16:02:41.629266 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:02:41.629322 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:02:41.656157 1500765 cri.go:89] found id: ""
	I1213 16:02:41.656181 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.656190 1500765 logs.go:284] No container was found matching "coredns"
	I1213 16:02:41.656196 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:02:41.656276 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:02:41.681173 1500765 cri.go:89] found id: ""
	I1213 16:02:41.681208 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.681217 1500765 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:02:41.681224 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:02:41.681308 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:02:41.708543 1500765 cri.go:89] found id: ""
	I1213 16:02:41.708568 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.708577 1500765 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:02:41.708583 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:02:41.708660 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:02:41.737039 1500765 cri.go:89] found id: ""
	I1213 16:02:41.737062 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.737071 1500765 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:02:41.737079 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:02:41.737137 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:02:41.762249 1500765 cri.go:89] found id: ""
	I1213 16:02:41.762275 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.762283 1500765 logs.go:284] No container was found matching "kindnet"
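	[editor's note] The loop above asks crictl for container IDs matching each control-plane component and finds none, confirming that no static pods were ever started. A small local equivalent is sketched below; it assumes crictl is installed, a CRI socket is available, and sudo is permitted, and it is not the cri.go implementation, which runs the same command over SSH.

```go
// listCRIContainers asks crictl for the IDs of containers whose name matches,
// and main reports components with no containers, as the logs.go records do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listCRIContainers(name string) ([]string, error) {
	// Same shape as the log: crictl ps -a --quiet --name=<name>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		ids, err := listCRIContainers(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		}
	}
}
```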
	I1213 16:02:41.762294 1500765 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:02:41.762306 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:02:41.828774 1500765 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:02:41.820905    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.821458    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823177    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823750    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.825347    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:02:41.820905    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.821458    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823177    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823750    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.825347    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:02:41.828797 1500765 logs.go:123] Gathering logs for containerd ...
	I1213 16:02:41.828810 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:02:41.870479 1500765 logs.go:123] Gathering logs for container status ...
	I1213 16:02:41.870512 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:02:41.897347 1500765 logs.go:123] Gathering logs for kubelet ...
	I1213 16:02:41.897374 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:02:41.954515 1500765 logs.go:123] Gathering logs for dmesg ...
	I1213 16:02:41.954549 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 16:02:41.971648 1500765 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 16:02:41.971703 1500765 out.go:285] * 
	* 
	W1213 16:02:41.971970 1500765 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
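The cgroups v1 warning above refers to the kubelet's FailCgroupV1 setting (spelled failCgroupV1 in a KubeletConfiguration). A minimal sketch of confirming what the generated config actually contains, assuming the config path logged above and the profile from this run; the commented YAML line only illustrates the field's form:

    # Check whether the kubelet config written by kubeadm opts back into cgroup v1.
    minikube ssh -p no-preload-439544 "sudo grep -i failcgroupv1 /var/lib/kubelet/config.yaml"
    # If present, the KubeletConfiguration (kubelet.config.k8s.io/v1beta1) field would read:
    #   failCgroupV1: false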
	
	W1213 16:02:41.971990 1500765 out.go:285] * 
	W1213 16:02:41.974206 1500765 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
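The advice in the box can be followed per profile; a minimal sketch using the failing profile from this run:

    # Collect the full log bundle for attaching to a GitHub issue.
    minikube logs -p no-preload-439544 --file=logs.txt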
	I1213 16:02:41.979727 1500765 out.go:203] 
	W1213 16:02:41.982586 1500765 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:02:41.982624 1500765 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 16:02:41.982645 1500765 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 16:02:41.985873 1500765 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
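The suggestion in the log above is to retry the same start with an explicit kubelet cgroup driver. A minimal sketch of that retry, reusing the arguments of the failed command and adding the suggested flag (whether systemd is the correct driver for this host is not established by the log; the host's Docker reports cgroupfs):

    # Illustrative retry with the suggested override; all other arguments copied from the failed run.
    out/minikube-linux-arm64 start -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true \
      --preload=false --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd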
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-439544
helpers_test.go:244: (dbg) docker inspect no-preload-439544:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	        "Created": "2025-12-13T15:54:12.178460013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1501116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T15:54:12.242684028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hostname",
	        "HostsPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hosts",
	        "LogPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d-json.log",
	        "Name": "/no-preload-439544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-439544:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-439544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	                "LowerDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-439544",
	                "Source": "/var/lib/docker/volumes/no-preload-439544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-439544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-439544",
	                "name.minikube.sigs.k8s.io": "no-preload-439544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da8c56f1648b4b29d365160a5c9c8f4b83511f3b06bb300dab72442b5fe339b6",
	            "SandboxKey": "/var/run/docker/netns/da8c56f1648b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34193"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34194"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34197"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34195"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34196"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-439544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:8c:8a:2b:c2:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77a202cfc0163f00e436e53caf7341626192055eeb5da6a6f5d953ced7f7adfb",
	                    "EndpointID": "2d33c5fac6c3fc25d8e7af1d5a5218284f13ab87b543c41deb4d4804231c62b5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-439544",
	                        "53d57adb0653"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
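Most of what the post-mortem needs from this JSON can be extracted directly with docker inspect format templates; a minimal sketch against the container from this run:

    # Container state, the forwarded SSH port (22/tcp), and the address on the per-profile network.
    docker inspect -f '{{.State.Status}}' no-preload-439544
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-439544
    docker inspect -f '{{(index .NetworkSettings.Networks "no-preload-439544").IPAddress}}' no-preload-439544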
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544: exit status 6 (362.406346ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 16:02:42.462115 1529319 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-439544" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
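The status output above already names the fix for the stale kubeconfig entry; a minimal sketch, assuming the profile from this run and a cluster that eventually reaches the apiserver:

    # Repoint kubectl at the profile's current endpoint, then verify which context is active.
    minikube update-context -p no-preload-439544
    kubectl config current-context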
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-439544 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─
────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─
────────────────────┤
	│ unpause │ -p old-k8s-version-912710 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-912710       │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:56 UTC │
	│ delete  │ -p old-k8s-version-912710                                                                                                                                                                                                                                  │ old-k8s-version-912710       │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:56 UTC │
	│ delete  │ -p old-k8s-version-912710                                                                                                                                                                                                                                  │ old-k8s-version-912710       │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:56 UTC │
	│ start   │ -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:57 UTC │ 13 Dec 25 15:57 UTC │
	│ stop    │ -p embed-certs-270324 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:57 UTC │ 13 Dec 25 15:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-270324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ start   │ -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ image   │ embed-certs-270324 image list --format=json                                                                                                                                                                                                                │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ pause   │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ unpause │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p disable-driver-mounts-614298                                                                                                                                                                                                                            │ disable-driver-mounts-614298 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 16:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-946932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:00 UTC │
	│ stop    │ -p default-k8s-diff-port-946932 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-946932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ image   │ default-k8s-diff-port-946932 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ pause   │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ unpause │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ start   │ -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 16:02:10
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 16:02:10.653265 1527131 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:02:10.653450 1527131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:02:10.653463 1527131 out.go:374] Setting ErrFile to fd 2...
	I1213 16:02:10.653469 1527131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:02:10.653723 1527131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:02:10.654178 1527131 out.go:368] Setting JSON to false
	I1213 16:02:10.655121 1527131 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27880,"bootTime":1765613851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:02:10.655187 1527131 start.go:143] virtualization:  
	I1213 16:02:10.659173 1527131 out.go:179] * [newest-cni-526531] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:02:10.663186 1527131 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:02:10.663301 1527131 notify.go:221] Checking for updates...
	I1213 16:02:10.669662 1527131 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:02:10.672735 1527131 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:02:10.675695 1527131 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:02:10.678798 1527131 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:02:10.681784 1527131 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:02:10.685234 1527131 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:02:10.685327 1527131 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:02:10.712873 1527131 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:02:10.712998 1527131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:02:10.776591 1527131 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:02:10.767542878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:02:10.776698 1527131 docker.go:319] overlay module found
	I1213 16:02:10.779851 1527131 out.go:179] * Using the docker driver based on user configuration
	I1213 16:02:10.782749 1527131 start.go:309] selected driver: docker
	I1213 16:02:10.782766 1527131 start.go:927] validating driver "docker" against <nil>
	I1213 16:02:10.782781 1527131 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:02:10.783532 1527131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:02:10.836394 1527131 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:02:10.826578222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:02:10.836552 1527131 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 16:02:10.836580 1527131 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 16:02:10.836798 1527131 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 16:02:10.839799 1527131 out.go:179] * Using Docker driver with root privileges
	I1213 16:02:10.842710 1527131 cni.go:84] Creating CNI manager for ""
	I1213 16:02:10.842780 1527131 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:02:10.842796 1527131 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 16:02:10.842882 1527131 start.go:353] cluster config:
	{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:02:10.846082 1527131 out.go:179] * Starting "newest-cni-526531" primary control-plane node in "newest-cni-526531" cluster
	I1213 16:02:10.848967 1527131 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:02:10.851950 1527131 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:02:10.854779 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:10.854844 1527131 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 16:02:10.854855 1527131 cache.go:65] Caching tarball of preloaded images
	I1213 16:02:10.854853 1527131 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:02:10.854953 1527131 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 16:02:10.854964 1527131 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 16:02:10.855092 1527131 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:02:10.855111 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json: {Name:mk86a24d01142c8f16a845d4170f48ade207872d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:10.882520 1527131 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:02:10.882541 1527131 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:02:10.882562 1527131 cache.go:243] Successfully downloaded all kic artifacts
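The base image is reused here rather than pulled again: minikube only needs to confirm that the kicbase image is already present in the local Docker daemon. Below is a minimal sketch of that kind of existence check, shelling out to `docker image inspect`, which exits non-zero when the image is absent; the helper name and the shortened image reference are illustrative, not minikube's actual cache/image code.

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local Docker daemon already has the image.
// `docker image inspect` exits 0 only when the image exists locally.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
		return
	}
	fmt.Println("not found locally, would pull")
}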
	I1213 16:02:10.882591 1527131 start.go:360] acquireMachinesLock for newest-cni-526531: {Name:mk8328ad899404812480e264edd6e13cbbd26230 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:02:10.883398 1527131 start.go:364] duration metric: took 789.437µs to acquireMachinesLock for "newest-cni-526531"
	I1213 16:02:10.883434 1527131 start.go:93] Provisioning new machine with config: &{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:02:10.883509 1527131 start.go:125] createHost starting for "" (driver="docker")
	I1213 16:02:10.886860 1527131 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 16:02:10.887084 1527131 start.go:159] libmachine.API.Create for "newest-cni-526531" (driver="docker")
	I1213 16:02:10.887118 1527131 client.go:173] LocalClient.Create starting
	I1213 16:02:10.887190 1527131 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem
	I1213 16:02:10.887231 1527131 main.go:143] libmachine: Decoding PEM data...
	I1213 16:02:10.887246 1527131 main.go:143] libmachine: Parsing certificate...
	I1213 16:02:10.887296 1527131 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem
	I1213 16:02:10.887414 1527131 main.go:143] libmachine: Decoding PEM data...
	I1213 16:02:10.887431 1527131 main.go:143] libmachine: Parsing certificate...
	I1213 16:02:10.887816 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 16:02:10.908607 1527131 cli_runner.go:211] docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 16:02:10.908685 1527131 network_create.go:284] running [docker network inspect newest-cni-526531] to gather additional debugging logs...
	I1213 16:02:10.908709 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531
	W1213 16:02:10.924665 1527131 cli_runner.go:211] docker network inspect newest-cni-526531 returned with exit code 1
	I1213 16:02:10.924698 1527131 network_create.go:287] error running [docker network inspect newest-cni-526531]: docker network inspect newest-cni-526531: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-526531 not found
	I1213 16:02:10.924713 1527131 network_create.go:289] output of [docker network inspect newest-cni-526531]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-526531 not found
	
	** /stderr **
	I1213 16:02:10.924834 1527131 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:02:10.945123 1527131 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2ccf9a9eb2c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:9e:64:ed:f4:f2} reservation:<nil>}
	I1213 16:02:10.945400 1527131 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2add06a95dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:3f:19:f0:2f:b1} reservation:<nil>}
	I1213 16:02:10.945650 1527131 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8517ffe6861d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:d6:d4:51:8b:3d} reservation:<nil>}
	I1213 16:02:10.946092 1527131 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a39030}
	I1213 16:02:10.946118 1527131 network_create.go:124] attempt to create docker network newest-cni-526531 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 16:02:10.946180 1527131 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-526531 newest-cni-526531
	I1213 16:02:11.005690 1527131 network_create.go:108] docker network newest-cni-526531 192.168.76.0/24 created
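The three "skipping subnet ... that is taken" lines followed by "using free private subnet 192.168.76.0/24" show how the network is chosen before `docker network create` runs. The sketch below captures that idea in simplified form, stepping through candidate 192.168.x.0/24 ranges until one is unused; the step size of 9 is only inferred from the 49 -> 58 -> 67 -> 76 progression in the log, and the function name and map-based bookkeeping are illustrative rather than minikube's actual network package.

package main

import "fmt"

// firstFreeSubnet walks candidate 192.168.x.0/24 subnets (stepping the third
// octet by 9, matching the progression seen in the log) and returns the first
// one that is not already used by an existing bridge network.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 254; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.76.0/24
}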
	I1213 16:02:11.005737 1527131 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-526531" container
	I1213 16:02:11.005844 1527131 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 16:02:11.023684 1527131 cli_runner.go:164] Run: docker volume create newest-cni-526531 --label name.minikube.sigs.k8s.io=newest-cni-526531 --label created_by.minikube.sigs.k8s.io=true
	I1213 16:02:11.043087 1527131 oci.go:103] Successfully created a docker volume newest-cni-526531
	I1213 16:02:11.043189 1527131 cli_runner.go:164] Run: docker run --rm --name newest-cni-526531-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-526531 --entrypoint /usr/bin/test -v newest-cni-526531:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 16:02:11.614357 1527131 oci.go:107] Successfully prepared a docker volume newest-cni-526531
	I1213 16:02:11.614420 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:11.614431 1527131 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 16:02:11.614506 1527131 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-526531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 16:02:15.477407 1527131 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-526531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.862862091s)
	I1213 16:02:15.477459 1527131 kic.go:203] duration metric: took 3.863024311s to extract preloaded images to volume ...
	W1213 16:02:15.477597 1527131 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 16:02:15.477708 1527131 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 16:02:15.532223 1527131 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-526531 --name newest-cni-526531 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-526531 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-526531 --network newest-cni-526531 --ip 192.168.76.2 --volume newest-cni-526531:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 16:02:15.845102 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Running}}
	I1213 16:02:15.866861 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:15.892916 1527131 cli_runner.go:164] Run: docker exec newest-cni-526531 stat /var/lib/dpkg/alternatives/iptables
	I1213 16:02:15.948563 1527131 oci.go:144] the created container "newest-cni-526531" has a running status.
	I1213 16:02:15.948590 1527131 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa...
	I1213 16:02:16.266786 1527131 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 16:02:16.296564 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:16.329593 1527131 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 16:02:16.329619 1527131 kic_runner.go:114] Args: [docker exec --privileged newest-cni-526531 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 16:02:16.396781 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:16.416507 1527131 machine.go:94] provisionDockerMachine start ...
	I1213 16:02:16.416610 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:16.437096 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:16.437445 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:16.437455 1527131 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:02:16.438031 1527131 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45440->127.0.0.1:34223: read: connection reset by peer
	I1213 16:02:19.590785 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
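The "Error dialing TCP ... connection reset by peer" line, followed a few seconds later by a successful `hostname` run, reflects the usual pattern of polling the forwarded SSH port until sshd inside the freshly started container accepts connections. A hedged sketch of such a wait loop follows; the port, timeout, and function name are illustrative, not libmachine's actual SSH client.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the forwarded SSH port until a TCP connection succeeds or
// the deadline expires. Right after `docker run`, sshd inside the container
// may not be listening yet, which is why the first dial in the log was reset.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("127.0.0.1:34223", time.Minute); err != nil {
		fmt.Println(err)
	}
}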
	
	I1213 16:02:19.590808 1527131 ubuntu.go:182] provisioning hostname "newest-cni-526531"
	I1213 16:02:19.590880 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:19.609205 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:19.609519 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:19.609531 1527131 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-526531 && echo "newest-cni-526531" | sudo tee /etc/hostname
	I1213 16:02:19.768653 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:02:19.768776 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:19.785859 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:19.786173 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:19.786190 1527131 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-526531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-526531/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-526531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:02:19.943619 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:02:19.943646 1527131 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:02:19.943683 1527131 ubuntu.go:190] setting up certificates
	I1213 16:02:19.943694 1527131 provision.go:84] configureAuth start
	I1213 16:02:19.943767 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:19.960971 1527131 provision.go:143] copyHostCerts
	I1213 16:02:19.961044 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:02:19.961058 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:02:19.961139 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:02:19.961239 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:02:19.961249 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:02:19.961277 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:02:19.961346 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:02:19.961355 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:02:19.961380 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:02:19.961441 1527131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.newest-cni-526531 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-526531]
	I1213 16:02:20.054612 1527131 provision.go:177] copyRemoteCerts
	I1213 16:02:20.054686 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:02:20.054736 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.072851 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.179668 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:02:20.198845 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 16:02:20.217676 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 16:02:20.236010 1527131 provision.go:87] duration metric: took 292.302594ms to configureAuth
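configureAuth above generates a server certificate signed by the minikube CA with SANs [127.0.0.1 192.168.76.2 localhost minikube newest-cni-526531]. The sketch below shows how a CA-signed certificate with those SANs can be produced with Go's crypto/x509; it creates throwaway keys in memory instead of loading ca.pem/ca-key.pem from the .minikube directory, so it illustrates only the shape of the certificate, not minikube's provisioner.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key pair (minikube would load ca.pem / ca-key.pem instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-526531"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-526531"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}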
	I1213 16:02:20.236050 1527131 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:02:20.236287 1527131 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:02:20.236298 1527131 machine.go:97] duration metric: took 3.819772251s to provisionDockerMachine
	I1213 16:02:20.236311 1527131 client.go:176] duration metric: took 9.349180869s to LocalClient.Create
	I1213 16:02:20.236333 1527131 start.go:167] duration metric: took 9.349249118s to libmachine.API.Create "newest-cni-526531"
	I1213 16:02:20.236344 1527131 start.go:293] postStartSetup for "newest-cni-526531" (driver="docker")
	I1213 16:02:20.236355 1527131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:02:20.236412 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:02:20.236459 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.253931 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.359511 1527131 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:02:20.363075 1527131 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:02:20.363102 1527131 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:02:20.363114 1527131 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:02:20.363170 1527131 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:02:20.363253 1527131 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:02:20.363383 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:02:20.370977 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:02:20.388811 1527131 start.go:296] duration metric: took 152.451817ms for postStartSetup
	I1213 16:02:20.389184 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:20.406647 1527131 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:02:20.406930 1527131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:02:20.406975 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.424459 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.529476 1527131 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:02:20.539030 1527131 start.go:128] duration metric: took 9.655490819s to createHost
	I1213 16:02:20.539056 1527131 start.go:83] releasing machines lock for "newest-cni-526531", held for 9.655642684s
	I1213 16:02:20.539196 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:20.566091 1527131 ssh_runner.go:195] Run: cat /version.json
	I1213 16:02:20.566128 1527131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:02:20.566142 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.566184 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.588830 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.608973 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.799431 1527131 ssh_runner.go:195] Run: systemctl --version
	I1213 16:02:20.806227 1527131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:02:20.810716 1527131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:02:20.810789 1527131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:02:20.839037 1527131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 16:02:20.839104 1527131 start.go:496] detecting cgroup driver to use...
	I1213 16:02:20.839151 1527131 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:02:20.839236 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:02:20.854464 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:02:20.867574 1527131 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:02:20.867669 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:02:20.885257 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:02:20.903596 1527131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:02:21.022899 1527131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:02:21.152487 1527131 docker.go:234] disabling docker service ...
	I1213 16:02:21.152550 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:02:21.174727 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:02:21.188382 1527131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:02:21.299657 1527131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:02:21.434130 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:02:21.446805 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:02:21.461400 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:02:21.470517 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:02:21.479694 1527131 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:02:21.479759 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:02:21.494124 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:02:21.502957 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:02:21.512551 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:02:21.521611 1527131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:02:21.530083 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:02:21.539325 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:02:21.548742 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:02:21.557617 1527131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:02:21.565268 1527131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:02:21.572714 1527131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:02:21.683769 1527131 ssh_runner.go:195] Run: sudo systemctl restart containerd
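The preceding sed runs rewrite /etc/containerd/config.toml (sandbox image, restrict_oom_score_adj, SystemdCgroup=false for the cgroupfs driver, CNI conf_dir, enable_unprivileged_ports) before containerd is restarted. As a small illustration, here is the same SystemdCgroup edit expressed in Go rather than sed; the path and the pattern come from the log line above, while the function name and error handling are assumptions.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupfs flips every `SystemdCgroup = ...` line in a containerd config
// to `SystemdCgroup = false`, mirroring the sed expression in the log:
//   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
func setCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setCgroupfs("/etc/containerd/config.toml"); err != nil {
		fmt.Println(err)
	}
}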
	I1213 16:02:21.823560 1527131 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:02:21.823710 1527131 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:02:21.827515 1527131 start.go:564] Will wait 60s for crictl version
	I1213 16:02:21.827583 1527131 ssh_runner.go:195] Run: which crictl
	I1213 16:02:21.831175 1527131 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:02:21.854565 1527131 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 16:02:21.854637 1527131 ssh_runner.go:195] Run: containerd --version
	I1213 16:02:21.878720 1527131 ssh_runner.go:195] Run: containerd --version
	I1213 16:02:21.901809 1527131 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 16:02:21.904695 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:02:21.920670 1527131 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 16:02:21.924637 1527131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:02:21.937646 1527131 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 16:02:21.940537 1527131 kubeadm.go:884] updating cluster {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:02:21.940697 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:21.940787 1527131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:02:21.972241 1527131 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:02:21.972268 1527131 containerd.go:534] Images already preloaded, skipping extraction
	I1213 16:02:21.972335 1527131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:02:22.011228 1527131 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:02:22.011254 1527131 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:02:22.011263 1527131 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 16:02:22.011415 1527131 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-526531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 16:02:22.011503 1527131 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:02:22.037059 1527131 cni.go:84] Creating CNI manager for ""
	I1213 16:02:22.037085 1527131 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:02:22.037100 1527131 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 16:02:22.037123 1527131 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-526531 NodeName:newest-cni-526531 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:02:22.037245 1527131 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-526531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
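The kubeadm config above is rendered from the options struct logged at kubeadm.go:190 (advertise address 192.168.76.2, pod CIDR 10.42.0.0/16, service CIDR 10.96.0.0/12, cgroupfs cgroup driver). Below is a minimal text/template sketch of how such options could be turned into the ClusterConfiguration networking stanza; the template text and Options struct are illustrative and are not minikube's actual bootstrapper templates.

package main

import (
	"os"
	"text/template"
)

// Options mirrors a few of the kubeadm options seen in the log.
type Options struct {
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
	DNSDomain         string
}

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	tmpl := template.Must(template.New("cfg").Parse(clusterCfg))
	_ = tmpl.Execute(os.Stdout, Options{
		KubernetesVersion: "v1.35.0-beta.0",
		PodSubnet:         "10.42.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		DNSDomain:         "cluster.local",
	})
}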
	
	I1213 16:02:22.037324 1527131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 16:02:22.045616 1527131 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:02:22.045746 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:02:22.054164 1527131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 16:02:22.068023 1527131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 16:02:22.085623 1527131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 16:02:22.101118 1527131 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:02:22.105257 1527131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:02:22.115696 1527131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:02:22.236674 1527131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:02:22.253725 1527131 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531 for IP: 192.168.76.2
	I1213 16:02:22.253801 1527131 certs.go:195] generating shared ca certs ...
	I1213 16:02:22.253832 1527131 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.254016 1527131 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:02:22.254124 1527131 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:02:22.254153 1527131 certs.go:257] generating profile certs ...
	I1213 16:02:22.254236 1527131 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key
	I1213 16:02:22.254267 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt with IP's: []
	I1213 16:02:22.746862 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt ...
	I1213 16:02:22.746902 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt: {Name:mk7b618219326f9fba540570e126db6afef7db97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.747100 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key ...
	I1213 16:02:22.747113 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key: {Name:mkadefb7fb5fbcd2154d988162829a52daab8655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.747208 1527131 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7
	I1213 16:02:22.747225 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 16:02:22.809461 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 ...
	I1213 16:02:22.809493 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7: {Name:mkce6931933926d60edd03298cb3538c188eea65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.809651 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7 ...
	I1213 16:02:22.809660 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7: {Name:mk5267764b911bf176ac97c9b4dd7d199f6b5ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.809731 1527131 certs.go:382] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt
	I1213 16:02:22.809817 1527131 certs.go:386] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key
	I1213 16:02:22.809875 1527131 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key
	I1213 16:02:22.809898 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt with IP's: []
	I1213 16:02:23.001038 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt ...
	I1213 16:02:23.001077 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt: {Name:mk387ba28125d038f533411623a4bd220070ddcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:23.002037 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key ...
	I1213 16:02:23.002079 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key: {Name:mk1a039510f32e55e5dd18d9c94a59fef628608a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:23.002321 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:02:23.002370 1527131 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:02:23.002380 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:02:23.002408 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:02:23.002444 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:02:23.002470 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:02:23.002520 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:02:23.003157 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:02:23.024481 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:02:23.042947 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:02:23.062246 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:02:23.080909 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 16:02:23.101609 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 16:02:23.121532 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:02:23.141397 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1213 16:02:23.162222 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:02:23.180800 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:02:23.199086 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:02:23.216531 1527131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:02:23.229620 1527131 ssh_runner.go:195] Run: openssl version
	I1213 16:02:23.236222 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.244051 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:02:23.251982 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.255821 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.255903 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.297335 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:02:23.305087 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12529342.pem /etc/ssl/certs/3ec20f2e.0
	I1213 16:02:23.312878 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.320527 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:02:23.328098 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.331918 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.331997 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.373256 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:02:23.381999 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 16:02:23.389673 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.397973 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:02:23.406099 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.410027 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.410090 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.453652 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:02:23.461102 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1252934.pem /etc/ssl/certs/51391683.0
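Each CA installed above ends up in two places: the PEM file under /usr/share/ca-certificates and a symlink in /etc/ssl/certs named after its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is what the `openssl x509 -hash -noout` runs compute. Here is a brief sketch of that hash-and-symlink step, shelling out to openssl in the same way; the paths and helper name are illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 pointing at certPath,
// where <hash> is the OpenSSL subject hash of the certificate.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // mimic `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}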
	I1213 16:02:23.469641 1527131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:02:23.473464 1527131 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 16:02:23.473520 1527131 kubeadm.go:401] StartCluster: {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:02:23.473612 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:02:23.473675 1527131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:02:23.501906 1527131 cri.go:89] found id: ""
	I1213 16:02:23.501976 1527131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:02:23.509856 1527131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 16:02:23.517759 1527131 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 16:02:23.517824 1527131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 16:02:23.525757 1527131 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 16:02:23.525778 1527131 kubeadm.go:158] found existing configuration files:
	
	I1213 16:02:23.525864 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 16:02:23.533675 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 16:02:23.533781 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 16:02:23.541421 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 16:02:23.549139 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 16:02:23.549209 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 16:02:23.556514 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 16:02:23.563859 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 16:02:23.563926 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 16:02:23.571345 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 16:02:23.578972 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 16:02:23.579034 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 16:02:23.588349 1527131 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 16:02:23.644568 1527131 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:02:23.644844 1527131 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:02:23.719501 1527131 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:02:23.719596 1527131 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:02:23.719638 1527131 kubeadm.go:319] OS: Linux
	I1213 16:02:23.719695 1527131 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:02:23.719756 1527131 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:02:23.719822 1527131 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:02:23.719885 1527131 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:02:23.719948 1527131 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:02:23.720014 1527131 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:02:23.720065 1527131 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:02:23.720126 1527131 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:02:23.720184 1527131 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:02:23.799280 1527131 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:02:23.799447 1527131 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:02:23.799586 1527131 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:02:23.813871 1527131 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:02:23.820586 1527131 out.go:252]   - Generating certificates and keys ...
	I1213 16:02:23.820722 1527131 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:02:23.820831 1527131 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:02:24.062915 1527131 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 16:02:24.119432 1527131 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 16:02:24.837877 1527131 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 16:02:25.323783 1527131 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 16:02:25.382177 1527131 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 16:02:25.382477 1527131 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:02:25.533405 1527131 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 16:02:25.533842 1527131 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:02:25.796805 1527131 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 16:02:25.975896 1527131 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 16:02:26.105650 1527131 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 16:02:26.105962 1527131 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:02:26.444172 1527131 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:02:26.939066 1527131 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:02:27.121431 1527131 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:02:27.579446 1527131 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:02:27.628725 1527131 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:02:27.629390 1527131 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:02:27.631991 1527131 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:02:27.635735 1527131 out.go:252]   - Booting up control plane ...
	I1213 16:02:27.635847 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:02:27.635926 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:02:27.635993 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:02:27.657055 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:02:27.657166 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:02:27.664926 1527131 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:02:27.665403 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:02:27.665639 1527131 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:02:27.803169 1527131 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:02:27.803302 1527131 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 16:02:41.545926 1500765 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 16:02:41.546108 1500765 kubeadm.go:319] 
	I1213 16:02:41.546236 1500765 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 16:02:41.551134 1500765 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:02:41.551190 1500765 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:02:41.551289 1500765 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:02:41.551373 1500765 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:02:41.551414 1500765 kubeadm.go:319] OS: Linux
	I1213 16:02:41.551459 1500765 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:02:41.551511 1500765 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:02:41.551561 1500765 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:02:41.551612 1500765 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:02:41.551663 1500765 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:02:41.551715 1500765 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:02:41.551764 1500765 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:02:41.551816 1500765 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:02:41.551866 1500765 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:02:41.551941 1500765 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:02:41.552042 1500765 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:02:41.552133 1500765 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:02:41.552199 1500765 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:02:41.555522 1500765 out.go:252]   - Generating certificates and keys ...
	I1213 16:02:41.555641 1500765 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:02:41.555717 1500765 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:02:41.555797 1500765 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 16:02:41.555873 1500765 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 16:02:41.555970 1500765 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 16:02:41.556031 1500765 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 16:02:41.556110 1500765 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 16:02:41.556213 1500765 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 16:02:41.556310 1500765 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 16:02:41.556431 1500765 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 16:02:41.556486 1500765 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 16:02:41.556559 1500765 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:02:41.556617 1500765 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:02:41.556678 1500765 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:02:41.556736 1500765 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:02:41.556817 1500765 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:02:41.556888 1500765 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:02:41.556980 1500765 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:02:41.557075 1500765 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:02:41.560042 1500765 out.go:252]   - Booting up control plane ...
	I1213 16:02:41.560143 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:02:41.560258 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:02:41.560348 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:02:41.560479 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:02:41.560588 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:02:41.560701 1500765 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:02:41.560824 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:02:41.560880 1500765 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:02:41.561017 1500765 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:02:41.561131 1500765 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 16:02:41.561233 1500765 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000293839s
	I1213 16:02:41.561265 1500765 kubeadm.go:319] 
	I1213 16:02:41.561329 1500765 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 16:02:41.561367 1500765 kubeadm.go:319] 	- The kubelet is not running
	I1213 16:02:41.561492 1500765 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 16:02:41.561506 1500765 kubeadm.go:319] 
	I1213 16:02:41.561630 1500765 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 16:02:41.561673 1500765 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 16:02:41.561708 1500765 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 16:02:41.561776 1500765 kubeadm.go:319] 
	I1213 16:02:41.561777 1500765 kubeadm.go:403] duration metric: took 8m8.131517099s to StartCluster
	I1213 16:02:41.561824 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:02:41.561903 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:02:41.594564 1500765 cri.go:89] found id: ""
	I1213 16:02:41.594594 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.594603 1500765 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:02:41.594609 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:02:41.594677 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:02:41.629231 1500765 cri.go:89] found id: ""
	I1213 16:02:41.629252 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.629260 1500765 logs.go:284] No container was found matching "etcd"
	I1213 16:02:41.629266 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:02:41.629322 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:02:41.656157 1500765 cri.go:89] found id: ""
	I1213 16:02:41.656181 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.656190 1500765 logs.go:284] No container was found matching "coredns"
	I1213 16:02:41.656196 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:02:41.656276 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:02:41.681173 1500765 cri.go:89] found id: ""
	I1213 16:02:41.681208 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.681217 1500765 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:02:41.681224 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:02:41.681308 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:02:41.708543 1500765 cri.go:89] found id: ""
	I1213 16:02:41.708568 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.708577 1500765 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:02:41.708583 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:02:41.708660 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:02:41.737039 1500765 cri.go:89] found id: ""
	I1213 16:02:41.737062 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.737071 1500765 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:02:41.737079 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:02:41.737137 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:02:41.762249 1500765 cri.go:89] found id: ""
	I1213 16:02:41.762275 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.762283 1500765 logs.go:284] No container was found matching "kindnet"
	I1213 16:02:41.762294 1500765 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:02:41.762306 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:02:41.828774 1500765 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:02:41.820905    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.821458    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823177    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823750    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.825347    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:02:41.820905    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.821458    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823177    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823750    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.825347    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:02:41.828797 1500765 logs.go:123] Gathering logs for containerd ...
	I1213 16:02:41.828810 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:02:41.870479 1500765 logs.go:123] Gathering logs for container status ...
	I1213 16:02:41.870512 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:02:41.897347 1500765 logs.go:123] Gathering logs for kubelet ...
	I1213 16:02:41.897374 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:02:41.954515 1500765 logs.go:123] Gathering logs for dmesg ...
	I1213 16:02:41.954549 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 16:02:41.971648 1500765 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 16:02:41.971703 1500765 out.go:285] * 
	W1213 16:02:41.971970 1500765 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:02:41.971990 1500765 out.go:285] * 
	W1213 16:02:41.974206 1500765 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 16:02:41.979727 1500765 out.go:203] 
	W1213 16:02:41.982586 1500765 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:02:41.982624 1500765 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 16:02:41.982645 1500765 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 16:02:41.985873 1500765 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 15:54:23 no-preload-439544 containerd[760]: time="2025-12-13T15:54:23.378148111Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.600685306Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.603732906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.611915029Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.613116551Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.503138610Z" level=info msg="No images store for sha256:84ea4651cf4d4486006d1346129c6964687be99508987d0ca606406fbc15a298"
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.506879683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\""
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.528281020Z" level=info msg="ImageCreate event name:\"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.529509930Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.056611379Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.059970700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.072962113Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.074433027Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.221784082Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.224970821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.232633350Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.233266000Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.393544387Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.395762984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.407681609Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.408407697Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.791409724Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.793787530Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.800749932Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.801079615Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:02:43.107106    5564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:43.107939    5564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:43.109712    5564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:43.110477    5564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:43.112177    5564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 16:02:43 up  7:45,  0 user,  load average: 1.09, 1.67, 1.83
	Linux no-preload-439544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 16:02:40 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:02:40 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 13 16:02:40 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:40 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:40 no-preload-439544 kubelet[5376]: E1213 16:02:40.878971    5376 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:02:40 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:02:40 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:02:41 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 16:02:41 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:41 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:41 no-preload-439544 kubelet[5387]: E1213 16:02:41.642320    5387 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:02:41 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:02:41 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:02:42 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 16:02:42 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:42 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:42 no-preload-439544 kubelet[5473]: E1213 16:02:42.388098    5473 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:02:42 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:02:42 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:02:43 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 16:02:43 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:43 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:43 no-preload-439544 kubelet[5569]: E1213 16:02:43.152083    5569 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:02:43 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:02:43 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544: exit status 6 (318.280261ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 16:02:43.535952 1529545 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-439544" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-439544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (512.77s)
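
The FirstStart failure above traces back to the v1.35.0-beta.0 kubelet refusing to start on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1" in the kubelet journal, and the [WARNING SystemVerification] note about setting the kubelet configuration option FailCgroupV1 to false). A minimal triage sketch, run on the affected node; the profile name and flags simply mirror this test's setup and the suggestion minikube prints above, so they are illustrative rather than a confirmed fix:

    # Inspect the crash-looping kubelet (the commands kubeadm itself suggests)
    systemctl status kubelet
    journalctl -xeu kubelet

    # Confirm the host's cgroup version: "tmpfs" means cgroup v1, "cgroup2fs" means cgroup v2
    stat -fc %T /sys/fs/cgroup/

    # Workaround suggested in the minikube output (see kubernetes/minikube#4172)
    minikube start -p no-preload-439544 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd

Per the SystemVerification warning, the longer-term fix is booting the CI hosts with cgroup v2 (or explicitly setting FailCgroupV1 to false in the kubelet configuration), since cgroup v1 support is slated for removal.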

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (501.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 16:02:37.393913 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m19.813641823s)

                                                
                                                
-- stdout --
	* [newest-cni-526531] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-526531" primary control-plane node in "newest-cni-526531" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 16:02:10.653265 1527131 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:02:10.653450 1527131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:02:10.653463 1527131 out.go:374] Setting ErrFile to fd 2...
	I1213 16:02:10.653469 1527131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:02:10.653723 1527131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:02:10.654178 1527131 out.go:368] Setting JSON to false
	I1213 16:02:10.655121 1527131 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27880,"bootTime":1765613851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:02:10.655187 1527131 start.go:143] virtualization:  
	I1213 16:02:10.659173 1527131 out.go:179] * [newest-cni-526531] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:02:10.663186 1527131 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:02:10.663301 1527131 notify.go:221] Checking for updates...
	I1213 16:02:10.669662 1527131 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:02:10.672735 1527131 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:02:10.675695 1527131 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:02:10.678798 1527131 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:02:10.681784 1527131 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:02:10.685234 1527131 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:02:10.685327 1527131 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:02:10.712873 1527131 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:02:10.712998 1527131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:02:10.776591 1527131 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:02:10.767542878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:02:10.776698 1527131 docker.go:319] overlay module found
	I1213 16:02:10.779851 1527131 out.go:179] * Using the docker driver based on user configuration
	I1213 16:02:10.782749 1527131 start.go:309] selected driver: docker
	I1213 16:02:10.782766 1527131 start.go:927] validating driver "docker" against <nil>
	I1213 16:02:10.782781 1527131 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:02:10.783532 1527131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:02:10.836394 1527131 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:02:10.826578222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:02:10.836552 1527131 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 16:02:10.836580 1527131 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 16:02:10.836798 1527131 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 16:02:10.839799 1527131 out.go:179] * Using Docker driver with root privileges
	I1213 16:02:10.842710 1527131 cni.go:84] Creating CNI manager for ""
	I1213 16:02:10.842780 1527131 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:02:10.842796 1527131 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 16:02:10.842882 1527131 start.go:353] cluster config:
	{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:02:10.846082 1527131 out.go:179] * Starting "newest-cni-526531" primary control-plane node in "newest-cni-526531" cluster
	I1213 16:02:10.848967 1527131 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:02:10.851950 1527131 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:02:10.854779 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:10.854844 1527131 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 16:02:10.854855 1527131 cache.go:65] Caching tarball of preloaded images
	I1213 16:02:10.854853 1527131 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:02:10.854953 1527131 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 16:02:10.854964 1527131 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 16:02:10.855092 1527131 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:02:10.855111 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json: {Name:mk86a24d01142c8f16a845d4170f48ade207872d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:10.882520 1527131 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:02:10.882541 1527131 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:02:10.882562 1527131 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:02:10.882591 1527131 start.go:360] acquireMachinesLock for newest-cni-526531: {Name:mk8328ad899404812480e264edd6e13cbbd26230 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:02:10.883398 1527131 start.go:364] duration metric: took 789.437µs to acquireMachinesLock for "newest-cni-526531"
	I1213 16:02:10.883434 1527131 start.go:93] Provisioning new machine with config: &{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:02:10.883509 1527131 start.go:125] createHost starting for "" (driver="docker")
	I1213 16:02:10.886860 1527131 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 16:02:10.887084 1527131 start.go:159] libmachine.API.Create for "newest-cni-526531" (driver="docker")
	I1213 16:02:10.887118 1527131 client.go:173] LocalClient.Create starting
	I1213 16:02:10.887190 1527131 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem
	I1213 16:02:10.887231 1527131 main.go:143] libmachine: Decoding PEM data...
	I1213 16:02:10.887246 1527131 main.go:143] libmachine: Parsing certificate...
	I1213 16:02:10.887296 1527131 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem
	I1213 16:02:10.887414 1527131 main.go:143] libmachine: Decoding PEM data...
	I1213 16:02:10.887431 1527131 main.go:143] libmachine: Parsing certificate...
	I1213 16:02:10.887816 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 16:02:10.908607 1527131 cli_runner.go:211] docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 16:02:10.908685 1527131 network_create.go:284] running [docker network inspect newest-cni-526531] to gather additional debugging logs...
	I1213 16:02:10.908709 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531
	W1213 16:02:10.924665 1527131 cli_runner.go:211] docker network inspect newest-cni-526531 returned with exit code 1
	I1213 16:02:10.924698 1527131 network_create.go:287] error running [docker network inspect newest-cni-526531]: docker network inspect newest-cni-526531: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-526531 not found
	I1213 16:02:10.924713 1527131 network_create.go:289] output of [docker network inspect newest-cni-526531]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-526531 not found
	
	** /stderr **
	I1213 16:02:10.924834 1527131 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:02:10.945123 1527131 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2ccf9a9eb2c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:9e:64:ed:f4:f2} reservation:<nil>}
	I1213 16:02:10.945400 1527131 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2add06a95dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:3f:19:f0:2f:b1} reservation:<nil>}
	I1213 16:02:10.945650 1527131 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8517ffe6861d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:d6:d4:51:8b:3d} reservation:<nil>}
	I1213 16:02:10.946092 1527131 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a39030}
	I1213 16:02:10.946118 1527131 network_create.go:124] attempt to create docker network newest-cni-526531 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 16:02:10.946180 1527131 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-526531 newest-cni-526531
	I1213 16:02:11.005690 1527131 network_create.go:108] docker network newest-cni-526531 192.168.76.0/24 created
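
	The subnet scan above skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 (already held by other profile networks) and settles on 192.168.76.0/24. As a manual spot check outside the test run (hypothetical verification, not something the test itself performs), the freshly created bridge can be inspected with the plain docker CLI:

	docker network inspect newest-cni-526531 \
	  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	# expected output: subnet=192.168.76.0/24 gateway=192.168.76.1
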
	I1213 16:02:11.005737 1527131 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-526531" container
	I1213 16:02:11.005844 1527131 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 16:02:11.023684 1527131 cli_runner.go:164] Run: docker volume create newest-cni-526531 --label name.minikube.sigs.k8s.io=newest-cni-526531 --label created_by.minikube.sigs.k8s.io=true
	I1213 16:02:11.043087 1527131 oci.go:103] Successfully created a docker volume newest-cni-526531
	I1213 16:02:11.043189 1527131 cli_runner.go:164] Run: docker run --rm --name newest-cni-526531-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-526531 --entrypoint /usr/bin/test -v newest-cni-526531:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 16:02:11.614357 1527131 oci.go:107] Successfully prepared a docker volume newest-cni-526531
	I1213 16:02:11.614420 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:11.614431 1527131 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 16:02:11.614506 1527131 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-526531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 16:02:15.477407 1527131 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-526531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.862862091s)
	I1213 16:02:15.477459 1527131 kic.go:203] duration metric: took 3.863024311s to extract preloaded images to volume ...
	W1213 16:02:15.477597 1527131 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 16:02:15.477708 1527131 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 16:02:15.532223 1527131 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-526531 --name newest-cni-526531 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-526531 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-526531 --network newest-cni-526531 --ip 192.168.76.2 --volume newest-cni-526531:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 16:02:15.845102 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Running}}
	I1213 16:02:15.866861 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:15.892916 1527131 cli_runner.go:164] Run: docker exec newest-cni-526531 stat /var/lib/dpkg/alternatives/iptables
	I1213 16:02:15.948563 1527131 oci.go:144] the created container "newest-cni-526531" has a running status.
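
	Each port in the long docker run invocation above is published to an ephemeral port on 127.0.0.1; the SSH mapping that the provisioner dials a few lines further down (34223 in this run) could be recovered by hand (hypothetical check) with:

	docker port newest-cni-526531 22/tcp
	# prints e.g. 127.0.0.1:34223 - the exact host port is chosen by Docker at container start
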
	I1213 16:02:15.948590 1527131 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa...
	I1213 16:02:16.266786 1527131 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 16:02:16.296564 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:16.329593 1527131 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 16:02:16.329619 1527131 kic_runner.go:114] Args: [docker exec --privileged newest-cni-526531 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 16:02:16.396781 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:16.416507 1527131 machine.go:94] provisionDockerMachine start ...
	I1213 16:02:16.416610 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:16.437096 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:16.437445 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:16.437455 1527131 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:02:16.438031 1527131 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45440->127.0.0.1:34223: read: connection reset by peer
	I1213 16:02:19.590785 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:02:19.590808 1527131 ubuntu.go:182] provisioning hostname "newest-cni-526531"
	I1213 16:02:19.590880 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:19.609205 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:19.609519 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:19.609531 1527131 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-526531 && echo "newest-cni-526531" | sudo tee /etc/hostname
	I1213 16:02:19.768653 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:02:19.768776 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:19.785859 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:19.786173 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:19.786190 1527131 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-526531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-526531/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-526531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:02:19.943619 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:02:19.943646 1527131 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:02:19.943683 1527131 ubuntu.go:190] setting up certificates
	I1213 16:02:19.943694 1527131 provision.go:84] configureAuth start
	I1213 16:02:19.943767 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:19.960971 1527131 provision.go:143] copyHostCerts
	I1213 16:02:19.961044 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:02:19.961058 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:02:19.961139 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:02:19.961239 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:02:19.961249 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:02:19.961277 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:02:19.961346 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:02:19.961355 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:02:19.961380 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:02:19.961441 1527131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.newest-cni-526531 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-526531]
	I1213 16:02:20.054612 1527131 provision.go:177] copyRemoteCerts
	I1213 16:02:20.054686 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:02:20.054736 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.072851 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.179668 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:02:20.198845 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 16:02:20.217676 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 16:02:20.236010 1527131 provision.go:87] duration metric: took 292.302594ms to configureAuth
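
	configureAuth signs a server certificate whose SANs cover 127.0.0.1, 192.168.76.2, localhost, minikube and the node name, then copies it into /etc/docker on the node. If one wanted to confirm the SANs by hand from a shell on the node (e.g. via minikube ssh; not part of the test flow), openssl can print them:

	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
	# should list the DNS names and IPs from the san=[...] entry above
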
	I1213 16:02:20.236050 1527131 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:02:20.236287 1527131 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:02:20.236298 1527131 machine.go:97] duration metric: took 3.819772251s to provisionDockerMachine
	I1213 16:02:20.236311 1527131 client.go:176] duration metric: took 9.349180869s to LocalClient.Create
	I1213 16:02:20.236333 1527131 start.go:167] duration metric: took 9.349249118s to libmachine.API.Create "newest-cni-526531"
	I1213 16:02:20.236344 1527131 start.go:293] postStartSetup for "newest-cni-526531" (driver="docker")
	I1213 16:02:20.236355 1527131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:02:20.236412 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:02:20.236459 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.253931 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.359511 1527131 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:02:20.363075 1527131 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:02:20.363102 1527131 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:02:20.363114 1527131 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:02:20.363170 1527131 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:02:20.363253 1527131 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:02:20.363383 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:02:20.370977 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:02:20.388811 1527131 start.go:296] duration metric: took 152.451817ms for postStartSetup
	I1213 16:02:20.389184 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:20.406647 1527131 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:02:20.406930 1527131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:02:20.406975 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.424459 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.529476 1527131 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:02:20.539030 1527131 start.go:128] duration metric: took 9.655490819s to createHost
	I1213 16:02:20.539056 1527131 start.go:83] releasing machines lock for "newest-cni-526531", held for 9.655642684s
	I1213 16:02:20.539196 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:20.566091 1527131 ssh_runner.go:195] Run: cat /version.json
	I1213 16:02:20.566128 1527131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:02:20.566142 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.566184 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.588830 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.608973 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.799431 1527131 ssh_runner.go:195] Run: systemctl --version
	I1213 16:02:20.806227 1527131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:02:20.810716 1527131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:02:20.810789 1527131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:02:20.839037 1527131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 16:02:20.839104 1527131 start.go:496] detecting cgroup driver to use...
	I1213 16:02:20.839151 1527131 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:02:20.839236 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:02:20.854464 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:02:20.867574 1527131 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:02:20.867669 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:02:20.885257 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:02:20.903596 1527131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:02:21.022899 1527131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:02:21.152487 1527131 docker.go:234] disabling docker service ...
	I1213 16:02:21.152550 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:02:21.174727 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:02:21.188382 1527131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:02:21.299657 1527131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:02:21.434130 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:02:21.446805 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:02:21.461400 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:02:21.470517 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:02:21.479694 1527131 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:02:21.479759 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:02:21.494124 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:02:21.502957 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:02:21.512551 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:02:21.521611 1527131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:02:21.530083 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:02:21.539325 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:02:21.548742 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:02:21.557617 1527131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:02:21.565268 1527131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:02:21.572714 1527131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:02:21.683769 1527131 ssh_runner.go:195] Run: sudo systemctl restart containerd
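
	The sed edits above pin the sandbox (pause) image to registry.k8s.io/pause:3.10.1, force SystemdCgroup = false to match the cgroupfs driver detected on the host, and normalize the runtime handler to io.containerd.runc.v2 before containerd is restarted. A minimal manual verification on the node (not performed by the test) could be:

	sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	systemctl is-active containerd   # should print "active" once the restart has completed
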
	I1213 16:02:21.823560 1527131 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:02:21.823710 1527131 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:02:21.827515 1527131 start.go:564] Will wait 60s for crictl version
	I1213 16:02:21.827583 1527131 ssh_runner.go:195] Run: which crictl
	I1213 16:02:21.831175 1527131 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:02:21.854565 1527131 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 16:02:21.854637 1527131 ssh_runner.go:195] Run: containerd --version
	I1213 16:02:21.878720 1527131 ssh_runner.go:195] Run: containerd --version
	I1213 16:02:21.901809 1527131 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 16:02:21.904695 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:02:21.920670 1527131 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 16:02:21.924637 1527131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
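
	The bash one-liner above rewrites /etc/hosts in place so that host.minikube.internal resolves to the bridge gateway 192.168.76.1, giving the node a stable name for reaching the host machine. Afterwards the node's hosts file should contain a line like the following (manual check, not part of the test output):

	grep host.minikube.internal /etc/hosts
	# 192.168.76.1	host.minikube.internal
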
	I1213 16:02:21.937646 1527131 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 16:02:21.940537 1527131 kubeadm.go:884] updating cluster {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:02:21.940697 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:21.940787 1527131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:02:21.972241 1527131 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:02:21.972268 1527131 containerd.go:534] Images already preloaded, skipping extraction
	I1213 16:02:21.972335 1527131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:02:22.011228 1527131 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:02:22.011254 1527131 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:02:22.011263 1527131 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 16:02:22.011415 1527131 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-526531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
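
	The [Service] override above is what minikube writes into the kubelet systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines below). The effective unit, including this override, could be reviewed on the node with (hypothetical check):

	systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
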
	I1213 16:02:22.011503 1527131 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:02:22.037059 1527131 cni.go:84] Creating CNI manager for ""
	I1213 16:02:22.037085 1527131 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:02:22.037100 1527131 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 16:02:22.037123 1527131 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-526531 NodeName:newest-cni-526531 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:02:22.037245 1527131 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-526531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 16:02:22.037324 1527131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 16:02:22.045616 1527131 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:02:22.045746 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:02:22.054164 1527131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 16:02:22.068023 1527131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 16:02:22.085623 1527131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
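
	The kubeadm config printed above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new before kubeadm runs. Assuming a kubeadm recent enough to ship the "config validate" subcommand (v1.26+), the staged file could be sanity-checked by hand with:

	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
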
	I1213 16:02:22.101118 1527131 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:02:22.105257 1527131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:02:22.115696 1527131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:02:22.236674 1527131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:02:22.253725 1527131 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531 for IP: 192.168.76.2
	I1213 16:02:22.253801 1527131 certs.go:195] generating shared ca certs ...
	I1213 16:02:22.253832 1527131 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.254016 1527131 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:02:22.254124 1527131 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:02:22.254153 1527131 certs.go:257] generating profile certs ...
	I1213 16:02:22.254236 1527131 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key
	I1213 16:02:22.254267 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt with IP's: []
	I1213 16:02:22.746862 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt ...
	I1213 16:02:22.746902 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt: {Name:mk7b618219326f9fba540570e126db6afef7db97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.747100 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key ...
	I1213 16:02:22.747113 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key: {Name:mkadefb7fb5fbcd2154d988162829a52daab8655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.747208 1527131 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7
	I1213 16:02:22.747225 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 16:02:22.809461 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 ...
	I1213 16:02:22.809493 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7: {Name:mkce6931933926d60edd03298cb3538c188eea65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.809651 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7 ...
	I1213 16:02:22.809660 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7: {Name:mk5267764b911bf176ac97c9b4dd7d199f6b5ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.809731 1527131 certs.go:382] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt
	I1213 16:02:22.809817 1527131 certs.go:386] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key
	I1213 16:02:22.809875 1527131 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key
	I1213 16:02:22.809898 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt with IP's: []
	I1213 16:02:23.001038 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt ...
	I1213 16:02:23.001077 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt: {Name:mk387ba28125d038f533411623a4bd220070ddcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:23.002037 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key ...
	I1213 16:02:23.002079 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key: {Name:mk1a039510f32e55e5dd18d9c94a59fef628608a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:23.002321 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:02:23.002370 1527131 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:02:23.002380 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:02:23.002408 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:02:23.002444 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:02:23.002470 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:02:23.002520 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:02:23.003157 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:02:23.024481 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:02:23.042947 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:02:23.062246 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:02:23.080909 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 16:02:23.101609 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 16:02:23.121532 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:02:23.141397 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1213 16:02:23.162222 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:02:23.180800 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:02:23.199086 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:02:23.216531 1527131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:02:23.229620 1527131 ssh_runner.go:195] Run: openssl version
	I1213 16:02:23.236222 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.244051 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:02:23.251982 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.255821 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.255903 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.297335 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:02:23.305087 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12529342.pem /etc/ssl/certs/3ec20f2e.0
	I1213 16:02:23.312878 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.320527 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:02:23.328098 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.331918 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.331997 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.373256 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:02:23.381999 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 16:02:23.389673 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.397973 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:02:23.406099 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.410027 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.410090 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.453652 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:02:23.461102 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1252934.pem /etc/ssl/certs/51391683.0
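The block above installs each CA into the node's trust store in two steps: the PEM already copied into /usr/share/ca-certificates is linked by name into /etc/ssl/certs, and then linked again under its openssl subject hash as <hash>.0 so TLS clients can locate it. A minimal sketch of the same pattern for an arbitrary certificate (the mycert.pem name is illustrative, not taken from this run):

# link the cert by name, derive its openssl subject hash, then add the hash-named symlink
sudo ln -fs /usr/share/ca-certificates/mycert.pem /etc/ssl/certs/mycert.pem
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/mycert.pem)
sudo ln -fs /etc/ssl/certs/mycert.pem /etc/ssl/certs/${hash}.0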
	I1213 16:02:23.469641 1527131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:02:23.473464 1527131 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 16:02:23.473520 1527131 kubeadm.go:401] StartCluster: {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:02:23.473612 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:02:23.473675 1527131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:02:23.501906 1527131 cri.go:89] found id: ""
	I1213 16:02:23.501976 1527131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:02:23.509856 1527131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 16:02:23.517759 1527131 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 16:02:23.517824 1527131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 16:02:23.525757 1527131 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 16:02:23.525778 1527131 kubeadm.go:158] found existing configuration files:
	
	I1213 16:02:23.525864 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 16:02:23.533675 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 16:02:23.533781 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 16:02:23.541421 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 16:02:23.549139 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 16:02:23.549209 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 16:02:23.556514 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 16:02:23.563859 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 16:02:23.563926 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 16:02:23.571345 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 16:02:23.578972 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 16:02:23.579034 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 16:02:23.588349 1527131 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 16:02:23.644568 1527131 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:02:23.644844 1527131 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:02:23.719501 1527131 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:02:23.719596 1527131 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:02:23.719638 1527131 kubeadm.go:319] OS: Linux
	I1213 16:02:23.719695 1527131 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:02:23.719756 1527131 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:02:23.719822 1527131 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:02:23.719885 1527131 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:02:23.719948 1527131 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:02:23.720014 1527131 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:02:23.720065 1527131 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:02:23.720126 1527131 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:02:23.720184 1527131 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:02:23.799280 1527131 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:02:23.799447 1527131 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:02:23.799586 1527131 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:02:23.813871 1527131 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:02:23.820586 1527131 out.go:252]   - Generating certificates and keys ...
	I1213 16:02:23.820722 1527131 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:02:23.820831 1527131 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:02:24.062915 1527131 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 16:02:24.119432 1527131 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 16:02:24.837877 1527131 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 16:02:25.323783 1527131 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 16:02:25.382177 1527131 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 16:02:25.382477 1527131 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:02:25.533405 1527131 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 16:02:25.533842 1527131 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:02:25.796805 1527131 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 16:02:25.975896 1527131 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 16:02:26.105650 1527131 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 16:02:26.105962 1527131 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:02:26.444172 1527131 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:02:26.939066 1527131 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:02:27.121431 1527131 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:02:27.579446 1527131 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:02:27.628725 1527131 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:02:27.629390 1527131 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:02:27.631991 1527131 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:02:27.635735 1527131 out.go:252]   - Booting up control plane ...
	I1213 16:02:27.635847 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:02:27.635926 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:02:27.635993 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:02:27.657055 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:02:27.657166 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:02:27.664926 1527131 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:02:27.665403 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:02:27.665639 1527131 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:02:27.803169 1527131 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:02:27.803302 1527131 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 16:06:27.802892 1527131 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001194483s
	I1213 16:06:27.802923 1527131 kubeadm.go:319] 
	I1213 16:06:27.803273 1527131 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 16:06:27.803399 1527131 kubeadm.go:319] 	- The kubelet is not running
	I1213 16:06:27.803765 1527131 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 16:06:27.803775 1527131 kubeadm.go:319] 
	I1213 16:06:27.803981 1527131 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 16:06:27.804042 1527131 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 16:06:27.804098 1527131 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 16:06:27.804106 1527131 kubeadm.go:319] 
	I1213 16:06:27.809079 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 16:06:27.809540 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 16:06:27.809697 1527131 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 16:06:27.810128 1527131 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 16:06:27.810147 1527131 kubeadm.go:319] 
	I1213 16:06:27.810227 1527131 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 16:06:27.810425 1527131 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001194483s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001194483s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
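This first kubeadm init attempt fails because the kubelet never reports healthy on http://127.0.0.1:10248/healthz within the 4m0s deadline; minikube then resets and retries below with the same outcome. A minimal sketch of the checks kubeadm itself suggests, run from the CI host and assuming the docker-driver node container is named after the profile newest-cni-526531 (the container name is an assumption; the unit, paths, and port come from the log):

# inspect the kubelet unit inside the node container (container name assumed from the profile)
docker exec newest-cni-526531 systemctl status kubelet --no-pager
docker exec newest-cni-526531 journalctl -xeu kubelet --no-pager | tail -n 200
# the health probe kubeadm was polling
docker exec newest-cni-526531 curl -sS http://127.0.0.1:10248/healthz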
	
	I1213 16:06:27.810556 1527131 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 16:06:28.218967 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 16:06:28.233104 1527131 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 16:06:28.233179 1527131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 16:06:28.241250 1527131 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 16:06:28.241272 1527131 kubeadm.go:158] found existing configuration files:
	
	I1213 16:06:28.241325 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 16:06:28.249399 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 16:06:28.249464 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 16:06:28.257096 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 16:06:28.265010 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 16:06:28.265075 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 16:06:28.273325 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 16:06:28.281364 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 16:06:28.281443 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 16:06:28.289177 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 16:06:28.297335 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 16:06:28.297406 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 16:06:28.305336 1527131 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 16:06:28.346459 1527131 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:06:28.346706 1527131 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:06:28.412526 1527131 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:06:28.412656 1527131 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:06:28.412720 1527131 kubeadm.go:319] OS: Linux
	I1213 16:06:28.412796 1527131 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:06:28.412874 1527131 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:06:28.412953 1527131 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:06:28.413023 1527131 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:06:28.413091 1527131 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:06:28.413171 1527131 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:06:28.413247 1527131 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:06:28.413330 1527131 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:06:28.413409 1527131 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:06:28.487502 1527131 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:06:28.487768 1527131 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:06:28.487886 1527131 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:06:28.493209 1527131 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:06:28.498603 1527131 out.go:252]   - Generating certificates and keys ...
	I1213 16:06:28.498777 1527131 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:06:28.498875 1527131 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:06:28.498987 1527131 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 16:06:28.499079 1527131 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 16:06:28.499178 1527131 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 16:06:28.499261 1527131 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 16:06:28.499387 1527131 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 16:06:28.499489 1527131 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 16:06:28.499597 1527131 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 16:06:28.499699 1527131 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 16:06:28.499765 1527131 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 16:06:28.499849 1527131 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:06:28.647459 1527131 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:06:28.854581 1527131 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:06:29.198188 1527131 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:06:29.369603 1527131 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:06:29.759796 1527131 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:06:29.760686 1527131 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:06:29.763405 1527131 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:06:29.766742 1527131 out.go:252]   - Booting up control plane ...
	I1213 16:06:29.766921 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:06:29.767060 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:06:29.767160 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:06:29.788844 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:06:29.789113 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:06:29.796997 1527131 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:06:29.797476 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:06:29.797700 1527131 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:06:29.934060 1527131 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:06:29.934180 1527131 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 16:10:29.934181 1527131 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000175102s
	I1213 16:10:29.934219 1527131 kubeadm.go:319] 
	I1213 16:10:29.934278 1527131 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 16:10:29.934315 1527131 kubeadm.go:319] 	- The kubelet is not running
	I1213 16:10:29.934420 1527131 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 16:10:29.934431 1527131 kubeadm.go:319] 
	I1213 16:10:29.934571 1527131 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 16:10:29.934616 1527131 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 16:10:29.934646 1527131 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 16:10:29.934653 1527131 kubeadm.go:319] 
	I1213 16:10:29.939000 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 16:10:29.939475 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 16:10:29.939605 1527131 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 16:10:29.939919 1527131 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 16:10:29.939941 1527131 kubeadm.go:319] 
	I1213 16:10:29.940021 1527131 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
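The retry above ends with the same wait-control-plane failure, and the repeated SystemVerification warning points at a likely cause: the host kernel is on cgroups v1, and the warning states that kubelet v1.35 or newer only keeps cgroup v1 support when the kubelet configuration option FailCgroupV1 is explicitly set to false. A hedged way to check what this run actually generated, reusing the /var/lib/kubelet/config.yaml path from the log (the newest-cni-526531 container name is again assumed from the profile):

# does the generated kubelet config carry the cgroup v1 opt-out named in the warning?
docker exec newest-cni-526531 grep -i failcgroupv1 /var/lib/kubelet/config.yaml
# which cgroup hierarchy does the node actually expose? (cgroup2fs = v2, tmpfs = v1)
docker exec newest-cni-526531 stat -fc %T /sys/fs/cgroup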
	I1213 16:10:29.940103 1527131 kubeadm.go:403] duration metric: took 8m6.466581637s to StartCluster
	I1213 16:10:29.940140 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:10:29.940207 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:10:29.965453 1527131 cri.go:89] found id: ""
	I1213 16:10:29.965477 1527131 logs.go:282] 0 containers: []
	W1213 16:10:29.965487 1527131 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:10:29.965493 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:10:29.965556 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:10:29.991522 1527131 cri.go:89] found id: ""
	I1213 16:10:29.991547 1527131 logs.go:282] 0 containers: []
	W1213 16:10:29.991560 1527131 logs.go:284] No container was found matching "etcd"
	I1213 16:10:29.991566 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:10:29.991628 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:10:30.032969 1527131 cri.go:89] found id: ""
	I1213 16:10:30.032993 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.033002 1527131 logs.go:284] No container was found matching "coredns"
	I1213 16:10:30.033008 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:10:30.033087 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:10:30.086903 1527131 cri.go:89] found id: ""
	I1213 16:10:30.086929 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.086937 1527131 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:10:30.086944 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:10:30.087018 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:10:30.120054 1527131 cri.go:89] found id: ""
	I1213 16:10:30.120085 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.120097 1527131 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:10:30.120106 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:10:30.120179 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:10:30.147481 1527131 cri.go:89] found id: ""
	I1213 16:10:30.147512 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.147521 1527131 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:10:30.147528 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:10:30.147597 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:10:30.175161 1527131 cri.go:89] found id: ""
	I1213 16:10:30.175192 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.175202 1527131 logs.go:284] No container was found matching "kindnet"
	I1213 16:10:30.175212 1527131 logs.go:123] Gathering logs for kubelet ...
	I1213 16:10:30.175227 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:10:30.236323 1527131 logs.go:123] Gathering logs for dmesg ...
	I1213 16:10:30.236366 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:10:30.252852 1527131 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:10:30.252882 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:10:30.323930 1527131 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:10:30.308479    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.309175    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.310915    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318127    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318780    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:10:30.308479    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.309175    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.310915    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318127    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318780    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:10:30.323954 1527131 logs.go:123] Gathering logs for containerd ...
	I1213 16:10:30.323966 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:10:30.363277 1527131 logs.go:123] Gathering logs for container status ...
	I1213 16:10:30.363323 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 16:10:30.390658 1527131 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000175102s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 16:10:30.390707 1527131 out.go:285] * 
	* 
	W1213 16:10:30.390758 1527131 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000175102s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:10:30.390773 1527131 out.go:285] * 
	W1213 16:10:30.392934 1527131 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 16:10:30.397735 1527131 out.go:203] 
	W1213 16:10:30.401437 1527131 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000175102s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:10:30.401483 1527131 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 16:10:30.401510 1527131 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 16:10:30.404721 1527131 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
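Editor's note: the kubeadm output above carries two actionable hints: the preflight warning says cgroup v1 hosts must set the kubelet configuration option 'FailCgroupV1' to 'false', and minikube itself suggests forcing the systemd cgroup driver. A minimal follow-up sketch for this profile, built only from commands and flags quoted in the log (the flags are the log's own suggestion, not a verified fix):

	# inspect the kubelet inside the node, as kubeadm suggests
	out/minikube-linux-arm64 -p newest-cni-526531 ssh "sudo systemctl status kubelet"
	out/minikube-linux-arm64 -p newest-cni-526531 ssh "sudo journalctl -xeu kubelet"
	# retry the same start with the cgroup-driver override minikube suggests
	out/minikube-linux-arm64 start -p newest-cni-526531 --memory=3072 --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd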
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-526531
helpers_test.go:244: (dbg) docker inspect newest-cni-526531:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54",
	        "Created": "2025-12-13T16:02:15.548035148Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1527552,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T16:02:15.61154228Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/hosts",
	        "LogPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54-json.log",
	        "Name": "/newest-cni-526531",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-526531:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-526531",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54",
	                "LowerDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-526531",
	                "Source": "/var/lib/docker/volumes/newest-cni-526531/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-526531",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-526531",
	                "name.minikube.sigs.k8s.io": "newest-cni-526531",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bfa296b8ce5b9a9521ebc2c98193f9318423ba22bf82448755a60c700c13c19",
	            "SandboxKey": "/var/run/docker/netns/4bfa296b8ce5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34223"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34224"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34227"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34225"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34226"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-526531": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:63:98:58:f5:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ae0d89b977ec0aa4cc17943d84decbf5f3cf47ff39573e4d4fdb9e9873e2828c",
	                    "EndpointID": "f95fa4c05c60c14b35da98f9b531c20fc8d91ab1572e72ada9f86ed1f99d4e1e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-526531",
	                        "dd2af60ccebf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-526531 -n newest-cni-526531
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-526531 -n newest-cni-526531: exit status 6 (362.287881ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 16:10:30.837665 1539561 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-526531" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
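Editor's note: the exit status 6 is a side effect of the failed start: the profile never reached the kubeconfig, so status reports a stale kubectl context. The warning's own remedy, sketched for this profile (only meaningful once the cluster actually starts):

	out/minikube-linux-arm64 -p newest-cni-526531 update-context
	kubectl config current-context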
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-526531 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─
────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─
────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-270324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:57 UTC │ 13 Dec 25 15:57 UTC │
	│ stop    │ -p embed-certs-270324 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:57 UTC │ 13 Dec 25 15:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-270324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ start   │ -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ image   │ embed-certs-270324 image list --format=json                                                                                                                                                                                                                │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ pause   │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ unpause │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p disable-driver-mounts-614298                                                                                                                                                                                                                            │ disable-driver-mounts-614298 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 16:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-946932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:00 UTC │
	│ stop    │ -p default-k8s-diff-port-946932 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-946932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ image   │ default-k8s-diff-port-946932 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ pause   │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ unpause │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ start   │ -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-439544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	│ stop    │ -p no-preload-439544 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │ 13 Dec 25 16:04 UTC │
	│ addons  │ enable dashboard -p no-preload-439544 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │ 13 Dec 25 16:04 UTC │
	│ start   │ -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 16:04:42
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 16:04:42.413194 1532633 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:04:42.413307 1532633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:04:42.413317 1532633 out.go:374] Setting ErrFile to fd 2...
	I1213 16:04:42.413323 1532633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:04:42.413567 1532633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:04:42.413904 1532633 out.go:368] Setting JSON to false
	I1213 16:04:42.414786 1532633 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28031,"bootTime":1765613851,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:04:42.414858 1532633 start.go:143] virtualization:  
	I1213 16:04:42.417845 1532633 out.go:179] * [no-preload-439544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:04:42.421555 1532633 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:04:42.421640 1532633 notify.go:221] Checking for updates...
	I1213 16:04:42.427687 1532633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:04:42.430499 1532633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:04:42.433392 1532633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:04:42.436121 1532633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:04:42.439040 1532633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:04:42.442494 1532633 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:04:42.443099 1532633 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:04:42.466960 1532633 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:04:42.467080 1532633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:04:42.529333 1532633 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:04:42.520259632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:04:42.529443 1532633 docker.go:319] overlay module found
	I1213 16:04:42.532652 1532633 out.go:179] * Using the docker driver based on existing profile
	I1213 16:04:42.535539 1532633 start.go:309] selected driver: docker
	I1213 16:04:42.535559 1532633 start.go:927] validating driver "docker" against &{Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:04:42.535665 1532633 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:04:42.536328 1532633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:04:42.590849 1532633 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:04:42.581095747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:04:42.591180 1532633 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 16:04:42.591218 1532633 cni.go:84] Creating CNI manager for ""
	I1213 16:04:42.591273 1532633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:04:42.591342 1532633 start.go:353] cluster config:
	{Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:04:42.596381 1532633 out.go:179] * Starting "no-preload-439544" primary control-plane node in "no-preload-439544" cluster
	I1213 16:04:42.599266 1532633 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:04:42.602152 1532633 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:04:42.604937 1532633 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:04:42.605025 1532633 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:04:42.605107 1532633 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/config.json ...
	I1213 16:04:42.605412 1532633 cache.go:107] acquiring lock: {Name:mk6458bc7297def26ffc87aa852ed603976a017c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605492 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 16:04:42.605501 1532633 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.253µs
	I1213 16:04:42.605513 1532633 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 16:04:42.605528 1532633 cache.go:107] acquiring lock: {Name:mk04216f72d0f7cd3d2308def830acac11c8b85d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605561 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 16:04:42.605566 1532633 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 43.305µs
	I1213 16:04:42.605573 1532633 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605582 1532633 cache.go:107] acquiring lock: {Name:mk2054b1540f1c54f9b25f5f78ec681c8220cfcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605608 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 16:04:42.605613 1532633 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 31.647µs
	I1213 16:04:42.605619 1532633 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605629 1532633 cache.go:107] acquiring lock: {Name:mke9c9289e43b08c6e721f866225f618ba3afddf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605654 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 16:04:42.605660 1532633 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 31.704µs
	I1213 16:04:42.605665 1532633 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605674 1532633 cache.go:107] acquiring lock: {Name:mkd9f47dfe476ebd2c352fdee514a99c9fba7295 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605698 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 16:04:42.605703 1532633 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.621µs
	I1213 16:04:42.605709 1532633 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605719 1532633 cache.go:107] acquiring lock: {Name:mkecf0483a10d405cf273c97b7180611bb889c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605749 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 16:04:42.605754 1532633 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 35.872µs
	I1213 16:04:42.605759 1532633 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 16:04:42.605768 1532633 cache.go:107] acquiring lock: {Name:mkb08190a177fa29b2e45167b12d4742acf808cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605793 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 16:04:42.605798 1532633 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 31.294µs
	I1213 16:04:42.605804 1532633 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 16:04:42.605812 1532633 cache.go:107] acquiring lock: {Name:mk18c875751b02ce01ad21e18c1d2a3a9ed5d930 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605845 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 16:04:42.605849 1532633 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.415µs
	I1213 16:04:42.605855 1532633 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 16:04:42.605861 1532633 cache.go:87] Successfully saved all images to host disk.
	I1213 16:04:42.624275 1532633 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:04:42.624299 1532633 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:04:42.624322 1532633 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:04:42.624352 1532633 start.go:360] acquireMachinesLock for no-preload-439544: {Name:mk6eb67fc85c056d1917e38b306c3e4e0ae30393 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.624426 1532633 start.go:364] duration metric: took 45.578µs to acquireMachinesLock for "no-preload-439544"
	I1213 16:04:42.624452 1532633 start.go:96] Skipping create...Using existing machine configuration
	I1213 16:04:42.624458 1532633 fix.go:54] fixHost starting: 
	I1213 16:04:42.624729 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:42.641391 1532633 fix.go:112] recreateIfNeeded on no-preload-439544: state=Stopped err=<nil>
	W1213 16:04:42.641430 1532633 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 16:04:42.644748 1532633 out.go:252] * Restarting existing docker container for "no-preload-439544" ...
	I1213 16:04:42.644834 1532633 cli_runner.go:164] Run: docker start no-preload-439544
	I1213 16:04:42.892931 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:42.919215 1532633 kic.go:430] container "no-preload-439544" state is running.
	I1213 16:04:42.919778 1532633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 16:04:42.944557 1532633 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/config.json ...
	I1213 16:04:42.944781 1532633 machine.go:94] provisionDockerMachine start ...
	I1213 16:04:42.944844 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:42.967340 1532633 main.go:143] libmachine: Using SSH client type: native
	I1213 16:04:42.967676 1532633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34228 <nil> <nil>}
	I1213 16:04:42.967688 1532633 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:04:42.968381 1532633 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46966->127.0.0.1:34228: read: connection reset by peer
	I1213 16:04:46.127864 1532633 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-439544
	
	I1213 16:04:46.127889 1532633 ubuntu.go:182] provisioning hostname "no-preload-439544"
	I1213 16:04:46.127971 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.150540 1532633 main.go:143] libmachine: Using SSH client type: native
	I1213 16:04:46.150873 1532633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34228 <nil> <nil>}
	I1213 16:04:46.150890 1532633 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-439544 && echo "no-preload-439544" | sudo tee /etc/hostname
	I1213 16:04:46.316630 1532633 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-439544
	
	I1213 16:04:46.316724 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.334085 1532633 main.go:143] libmachine: Using SSH client type: native
	I1213 16:04:46.334398 1532633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34228 <nil> <nil>}
	I1213 16:04:46.334425 1532633 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-439544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-439544/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-439544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:04:46.483606 1532633 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:04:46.483691 1532633 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:04:46.483736 1532633 ubuntu.go:190] setting up certificates
	I1213 16:04:46.483755 1532633 provision.go:84] configureAuth start
	I1213 16:04:46.483823 1532633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 16:04:46.500162 1532633 provision.go:143] copyHostCerts
	I1213 16:04:46.500243 1532633 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:04:46.500259 1532633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:04:46.500337 1532633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:04:46.500448 1532633 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:04:46.500465 1532633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:04:46.500494 1532633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:04:46.500550 1532633 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:04:46.500561 1532633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:04:46.500585 1532633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:04:46.500639 1532633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.no-preload-439544 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-439544]
	I1213 16:04:46.571887 1532633 provision.go:177] copyRemoteCerts
	I1213 16:04:46.571964 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:04:46.572031 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.590720 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:46.699229 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 16:04:46.717692 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:04:46.736074 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 16:04:46.754498 1532633 provision.go:87] duration metric: took 270.718838ms to configureAuth
	I1213 16:04:46.754524 1532633 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:04:46.754723 1532633 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:04:46.754730 1532633 machine.go:97] duration metric: took 3.809941558s to provisionDockerMachine
	I1213 16:04:46.754738 1532633 start.go:293] postStartSetup for "no-preload-439544" (driver="docker")
	I1213 16:04:46.754749 1532633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:04:46.754799 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:04:46.754840 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.773059 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:46.881154 1532633 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:04:46.885885 1532633 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:04:46.885916 1532633 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:04:46.885927 1532633 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:04:46.885987 1532633 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:04:46.886081 1532633 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:04:46.886202 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:04:46.895826 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:04:46.914821 1532633 start.go:296] duration metric: took 160.067146ms for postStartSetup
	I1213 16:04:46.914943 1532633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:04:46.915004 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.933638 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:47.036731 1532633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:04:47.041916 1532633 fix.go:56] duration metric: took 4.417449466s for fixHost
	I1213 16:04:47.041955 1532633 start.go:83] releasing machines lock for "no-preload-439544", held for 4.417501354s
	I1213 16:04:47.042027 1532633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 16:04:47.059436 1532633 ssh_runner.go:195] Run: cat /version.json
	I1213 16:04:47.059506 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:47.059506 1532633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:04:47.059564 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:47.084535 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:47.085394 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:47.187879 1532633 ssh_runner.go:195] Run: systemctl --version
	I1213 16:04:47.277224 1532633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:04:47.281744 1532633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:04:47.281868 1532633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:04:47.289697 1532633 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 16:04:47.289723 1532633 start.go:496] detecting cgroup driver to use...
	I1213 16:04:47.289772 1532633 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:04:47.289839 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:04:47.306480 1532633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:04:47.320548 1532633 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:04:47.320616 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:04:47.336688 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:04:47.350304 1532633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:04:47.479878 1532633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:04:47.617602 1532633 docker.go:234] disabling docker service ...
	I1213 16:04:47.617669 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:04:47.636022 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:04:47.651078 1532633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:04:47.763618 1532633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:04:47.889857 1532633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:04:47.903250 1532633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:04:47.917785 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:04:47.928047 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:04:47.937137 1532633 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:04:47.937223 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:04:47.946706 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:04:47.956145 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:04:47.964976 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:04:47.973942 1532633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:04:47.982426 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:04:47.991145 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:04:48.000472 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:04:48.013270 1532633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:04:48.021912 1532633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:04:48.030401 1532633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:04:48.154042 1532633 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 16:04:48.258872 1532633 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:04:48.258948 1532633 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:04:48.262883 1532633 start.go:564] Will wait 60s for crictl version
	I1213 16:04:48.262950 1532633 ssh_runner.go:195] Run: which crictl
	I1213 16:04:48.266721 1532633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:04:48.292243 1532633 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 16:04:48.292316 1532633 ssh_runner.go:195] Run: containerd --version
	I1213 16:04:48.313344 1532633 ssh_runner.go:195] Run: containerd --version
	I1213 16:04:48.341964 1532633 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 16:04:48.344943 1532633 cli_runner.go:164] Run: docker network inspect no-preload-439544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:04:48.371046 1532633 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 16:04:48.375277 1532633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:04:48.399899 1532633 kubeadm.go:884] updating cluster {Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:04:48.400017 1532633 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:04:48.400067 1532633 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:04:48.428371 1532633 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:04:48.428396 1532633 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:04:48.428408 1532633 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 16:04:48.428505 1532633 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-439544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 16:04:48.428573 1532633 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:04:48.457647 1532633 cni.go:84] Creating CNI manager for ""
	I1213 16:04:48.457673 1532633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:04:48.457695 1532633 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 16:04:48.457722 1532633 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-439544 NodeName:no-preload-439544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:04:48.457839 1532633 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-439544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 16:04:48.457908 1532633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 16:04:48.465484 1532633 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:04:48.465565 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:04:48.473169 1532633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 16:04:48.486257 1532633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 16:04:48.498821 1532633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 16:04:48.514097 1532633 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:04:48.518017 1532633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:04:48.528671 1532633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:04:48.641355 1532633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:04:48.658852 1532633 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544 for IP: 192.168.85.2
	I1213 16:04:48.658874 1532633 certs.go:195] generating shared ca certs ...
	I1213 16:04:48.658891 1532633 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:48.659056 1532633 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:04:48.659112 1532633 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:04:48.659125 1532633 certs.go:257] generating profile certs ...
	I1213 16:04:48.659257 1532633 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.key
	I1213 16:04:48.659352 1532633 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key.75137389
	I1213 16:04:48.659412 1532633 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.key
	I1213 16:04:48.659543 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:04:48.659584 1532633 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:04:48.659597 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:04:48.659638 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:04:48.659667 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:04:48.659704 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:04:48.659762 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:04:48.660460 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:04:48.678510 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:04:48.696835 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:04:48.715192 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:04:48.736544 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 16:04:48.754814 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 16:04:48.773396 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:04:48.791284 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 16:04:48.809761 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:04:48.827867 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:04:48.845597 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:04:48.862990 1532633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:04:48.875844 1532633 ssh_runner.go:195] Run: openssl version
	I1213 16:04:48.882335 1532633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.889759 1532633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:04:48.897307 1532633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.901108 1532633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.901221 1532633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.942179 1532633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:04:48.949998 1532633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:04:48.957450 1532633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:04:48.965192 1532633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:04:48.969267 1532633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:04:48.969332 1532633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:04:49.010426 1532633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:04:49.019213 1532633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.026990 1532633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:04:49.034610 1532633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.038616 1532633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.038700 1532633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.079625 1532633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:04:49.092345 1532633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:04:49.097174 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 16:04:49.138992 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 16:04:49.179959 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 16:04:49.220981 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 16:04:49.263836 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 16:04:49.305100 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 16:04:49.346214 1532633 kubeadm.go:401] StartCluster: {Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:04:49.346315 1532633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:04:49.346388 1532633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:04:49.374870 1532633 cri.go:89] found id: ""
	I1213 16:04:49.374958 1532633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:04:49.382718 1532633 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 16:04:49.382749 1532633 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 16:04:49.382843 1532633 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 16:04:49.392071 1532633 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 16:04:49.392512 1532633 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-439544" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:04:49.392626 1532633 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-439544" cluster setting kubeconfig missing "no-preload-439544" context setting]
	I1213 16:04:49.392945 1532633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:49.395692 1532633 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 16:04:49.403908 1532633 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 16:04:49.403991 1532633 kubeadm.go:602] duration metric: took 21.234385ms to restartPrimaryControlPlane
	I1213 16:04:49.404014 1532633 kubeadm.go:403] duration metric: took 57.808126ms to StartCluster
	I1213 16:04:49.404029 1532633 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:49.404097 1532633 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:04:49.404746 1532633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:49.404991 1532633 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:04:49.405373 1532633 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:04:49.405453 1532633 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 16:04:49.405529 1532633 addons.go:70] Setting storage-provisioner=true in profile "no-preload-439544"
	I1213 16:04:49.405551 1532633 addons.go:239] Setting addon storage-provisioner=true in "no-preload-439544"
	I1213 16:04:49.405574 1532633 host.go:66] Checking if "no-preload-439544" exists ...
	I1213 16:04:49.405617 1532633 addons.go:70] Setting dashboard=true in profile "no-preload-439544"
	I1213 16:04:49.405653 1532633 addons.go:239] Setting addon dashboard=true in "no-preload-439544"
	W1213 16:04:49.405672 1532633 addons.go:248] addon dashboard should already be in state true
	I1213 16:04:49.405720 1532633 host.go:66] Checking if "no-preload-439544" exists ...
	I1213 16:04:49.406068 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.406504 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.406575 1532633 addons.go:70] Setting default-storageclass=true in profile "no-preload-439544"
	I1213 16:04:49.406600 1532633 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-439544"
	I1213 16:04:49.406887 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.410533 1532633 out.go:179] * Verifying Kubernetes components...
	I1213 16:04:49.413615 1532633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:04:49.447417 1532633 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 16:04:49.451069 1532633 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:04:49.451101 1532633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 16:04:49.451201 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:49.463790 1532633 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 16:04:49.466503 1532633 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 16:04:49.473300 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 16:04:49.473383 1532633 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 16:04:49.473493 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:49.479179 1532633 addons.go:239] Setting addon default-storageclass=true in "no-preload-439544"
	I1213 16:04:49.479230 1532633 host.go:66] Checking if "no-preload-439544" exists ...
	I1213 16:04:49.479734 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.522588 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:49.545446 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:49.555551 1532633 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 16:04:49.555579 1532633 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 16:04:49.555649 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:49.583737 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:49.672869 1532633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:04:49.702326 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:04:49.726116 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 16:04:49.726144 1532633 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 16:04:49.731991 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:04:49.746280 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 16:04:49.746304 1532633 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 16:04:49.759419 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 16:04:49.759445 1532633 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 16:04:49.773846 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 16:04:49.773922 1532633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 16:04:49.788446 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 16:04:49.788520 1532633 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 16:04:49.801996 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 16:04:49.802073 1532633 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 16:04:49.815387 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 16:04:49.815464 1532633 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 16:04:49.828609 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 16:04:49.828684 1532633 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 16:04:49.862172 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:49.862245 1532633 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 16:04:49.898115 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:50.335585 1532633 node_ready.go:35] waiting up to 6m0s for node "no-preload-439544" to be "Ready" ...
	W1213 16:04:50.335668 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.335706 1532633 retry.go:31] will retry after 254.843686ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.335826 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.335840 1532633 retry.go:31] will retry after 189.333653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.336064 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.336084 1532633 retry.go:31] will retry after 239.72839ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.525319 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:04:50.576944 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:50.591356 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:50.603642 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.603688 1532633 retry.go:31] will retry after 288.501165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.701103 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.701138 1532633 retry.go:31] will retry after 467.260982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.701217 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.701231 1532633 retry.go:31] will retry after 509.7977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.893390 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:50.954719 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.954753 1532633 retry.go:31] will retry after 738.142646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.169190 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:51.211722 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:51.245032 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.245067 1532633 retry.go:31] will retry after 783.746721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:51.279035 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.279081 1532633 retry.go:31] will retry after 291.424758ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.570765 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:51.626988 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.627029 1532633 retry.go:31] will retry after 1.041042015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.693422 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:51.750389 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.750422 1532633 retry.go:31] will retry after 685.062417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.029491 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:52.108797 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.108902 1532633 retry.go:31] will retry after 939.299233ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:52.336815 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:04:52.436241 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:52.496715 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.496747 1532633 retry.go:31] will retry after 1.433097098s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.669004 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:52.730009 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.730041 1532633 retry.go:31] will retry after 640.138294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.049072 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:53.112314 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.112422 1532633 retry.go:31] will retry after 1.734157912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.371175 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:53.437917 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.437956 1532633 retry.go:31] will retry after 2.49121489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.930071 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:53.986900 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.986935 1532633 retry.go:31] will retry after 2.048688298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:54.336885 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:04:54.847106 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:54.923019 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:54.923054 1532633 retry.go:31] will retry after 2.142030138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:55.930227 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:55.990258 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:55.990294 1532633 retry.go:31] will retry after 2.707811037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:56.036521 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:56.097317 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:56.097352 1532633 retry.go:31] will retry after 2.146665141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:56.836913 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:04:57.065333 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:57.147079 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:57.147117 1532633 retry.go:31] will retry after 3.792914481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.244261 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:58.304505 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.304538 1532633 retry.go:31] will retry after 3.360821909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.698362 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:58.754622 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.754653 1532633 retry.go:31] will retry after 5.541004931s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:59.336144 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:00.940480 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:01.003756 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:01.003802 1532633 retry.go:31] will retry after 2.96874462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:01.336264 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:01.665917 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:01.728242 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:01.728275 1532633 retry.go:31] will retry after 8.916729655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:03.336811 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:03.973522 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:04.037741 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:04.037776 1532633 retry.go:31] will retry after 6.210277542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:04.296383 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:04.360008 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:04.360045 1532633 retry.go:31] will retry after 7.195036005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:05.337054 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:07.836826 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:09.837041 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:10.248588 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:10.313237 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:10.313283 1532633 retry.go:31] will retry after 8.934777878s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:10.646200 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:10.705656 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:10.705690 1532633 retry.go:31] will retry after 12.190283501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:11.555705 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:11.661890 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:11.661924 1532633 retry.go:31] will retry after 5.300472002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:12.336810 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:14.336968 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:16.337075 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:16.963159 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:17.023434 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:17.023464 1532633 retry.go:31] will retry after 7.246070268s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:18.836178 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:19.248832 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:19.312969 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:19.313003 1532633 retry.go:31] will retry after 13.568837967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
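The ten validation failures above are a symptom rather than the root cause: kubectl cannot download the OpenAPI schema because nothing is answering on localhost:8443. The --validate=false flag suggested in the message only disables that client-side schema check; the apply itself would still fail until the apiserver is reachable again. A minimal sketch, reusing the exact command from the log plus that flag (running it by hand is an assumption, not something the test does):

    # Skips only the client-side schema download; the apply still needs a live apiserver to succeed.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml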
	W1213 16:05:20.836857 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:22.896385 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:22.954841 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:22.954869 1532633 retry.go:31] will retry after 19.284270803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:23.336898 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:24.270582 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:24.330461 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:24.330496 1532633 retry.go:31] will retry after 25.107997507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
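Both the storageclass and storage-provisioner applies are now in the same retry loop, waiting on an apiserver that keeps refusing connections. A quick way to see whether the apiserver inside the node has come back is to probe its readiness endpoint, which a default kubeadm setup exposes to anonymous clients (a sketch; run it inside the node, e.g. over `minikube ssh`):

    # Prints "ok" and exits 0 once the apiserver is serving; exits non-zero (silently) while it is down.
    curl -skf https://localhost:8443/readyz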
	W1213 16:05:25.836832 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:27.837099 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:29.837229 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:32.337006 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:32.882520 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:32.944328 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:32.944368 1532633 retry.go:31] will retry after 16.148859129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:34.836937 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:37.337064 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:39.837056 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:42.239525 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:42.310135 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:42.310173 1532633 retry.go:31] will retry after 15.456030755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:42.336738 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:44.336942 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:46.337118 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:48.836877 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:49.094336 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:49.194140 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:49.194179 1532633 retry.go:31] will retry after 37.565219756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:49.439413 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:49.497701 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:49.497737 1532633 retry.go:31] will retry after 28.907874152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:51.336848 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:53.836235 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:55.836809 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:57.766432 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:57.827035 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:57.827069 1532633 retry.go:31] will retry after 21.817184299s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:58.336352 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:00.336702 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:02.337038 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:04.836820 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:06.836996 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:08.837192 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:11.337013 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:13.836907 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:16.336156 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:18.336864 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
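The node_ready poller above is simply issuing GET /api/v1/nodes/no-preload-439544 every couple of seconds and will keep warning until the apiserver answers. Once it does, the same Ready condition can be read with plain kubectl (a sketch; the jsonpath filter is standard kubectl, and a working kubeconfig for this cluster is assumed):

    # Prints "True" once the node reports the Ready condition.
    kubectl get node no-preload-439544 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'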
	I1213 16:06:18.406172 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:06:18.467162 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:06:18.467195 1532633 retry.go:31] will retry after 30.701956357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:06:19.645168 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:06:19.709360 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:06:19.709466 1532633 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 16:06:20.336963 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:22.337091 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:24.836902 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:26.760577 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:06:26.824828 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:06:26.824933 1532633 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
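With that, this start attempt has given up on both the 'default-storageclass' and 'dashboard' addons. Neither failure is permanent: once the control plane is actually serving, the same manifests can be re-applied by re-enabling the addons (a sketch; it assumes the profile name matches the node name shown in the log):

    # Re-runs the addon apply callbacks against a now-reachable apiserver.
    minikube addons enable default-storageclass -p no-preload-439544
    minikube addons enable dashboard -p no-preload-439544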
	W1213 16:06:27.336805 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:27.802892 1527131 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001194483s
	I1213 16:06:27.802923 1527131 kubeadm.go:319] 
	I1213 16:06:27.803273 1527131 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 16:06:27.803399 1527131 kubeadm.go:319] 	- The kubelet is not running
	I1213 16:06:27.803765 1527131 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 16:06:27.803775 1527131 kubeadm.go:319] 
	I1213 16:06:27.803981 1527131 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 16:06:27.804042 1527131 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 16:06:27.804098 1527131 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 16:06:27.804106 1527131 kubeadm.go:319] 
	I1213 16:06:27.809079 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 16:06:27.809540 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 16:06:27.809697 1527131 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 16:06:27.810128 1527131 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 16:06:27.810147 1527131 kubeadm.go:319] 
	I1213 16:06:27.810227 1527131 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
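Of the preflight warnings logged just above, the missing 'configs' kernel module only prevents kubeadm from parsing the kernel config, while the cgroups v1 deprecation warning depends on which cgroup mode the host (here the Docker "node" container) is running under. Which mode is in use can be checked with a one-liner (a generic sketch, not part of the test run):

    # Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on a cgroup v1 host.
    stat -fc %T /sys/fs/cgroup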
	W1213 16:06:27.810425 1527131 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001194483s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
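The wait-control-plane failure is purely a kubelet-health timeout: kubeadm polls http://127.0.0.1:10248/healthz for 4 minutes and never gets an answer. The same probe, plus the inspection steps the message itself suggests, can be run by hand on the node (a sketch using only commands named in the output above):

    # Probe the health endpoint kubeadm is waiting on.
    curl -sSL http://127.0.0.1:10248/healthz
    # If it refuses connections, look at why the kubelet never came up.
    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 50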
	
	I1213 16:06:27.810556 1527131 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 16:06:28.218967 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 16:06:28.233104 1527131 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 16:06:28.233179 1527131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 16:06:28.241250 1527131 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 16:06:28.241272 1527131 kubeadm.go:158] found existing configuration files:
	
	I1213 16:06:28.241325 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 16:06:28.249399 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 16:06:28.249464 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 16:06:28.257096 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 16:06:28.265010 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 16:06:28.265075 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 16:06:28.273325 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 16:06:28.281364 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 16:06:28.281443 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 16:06:28.289177 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 16:06:28.297335 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 16:06:28.297406 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
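The sequence above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and deletes the file when the endpoint is absent; here every grep exits with status 2 simply because the files no longer exist after `kubeadm reset`. The equivalent shell loop looks roughly like this (a behavioural sketch, not minikube's actual implementation):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already points at the expected control-plane endpoint.
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done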
	I1213 16:06:28.305336 1527131 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 16:06:28.346459 1527131 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:06:28.346706 1527131 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:06:28.412526 1527131 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:06:28.412656 1527131 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:06:28.412720 1527131 kubeadm.go:319] OS: Linux
	I1213 16:06:28.412796 1527131 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:06:28.412874 1527131 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:06:28.412953 1527131 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:06:28.413023 1527131 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:06:28.413091 1527131 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:06:28.413171 1527131 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:06:28.413247 1527131 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:06:28.413330 1527131 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:06:28.413409 1527131 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:06:28.487502 1527131 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:06:28.487768 1527131 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:06:28.487886 1527131 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:06:28.493209 1527131 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:06:28.498603 1527131 out.go:252]   - Generating certificates and keys ...
	I1213 16:06:28.498777 1527131 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:06:28.498875 1527131 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:06:28.498987 1527131 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 16:06:28.499079 1527131 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 16:06:28.499178 1527131 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 16:06:28.499261 1527131 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 16:06:28.499387 1527131 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 16:06:28.499489 1527131 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 16:06:28.499597 1527131 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 16:06:28.499699 1527131 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 16:06:28.499765 1527131 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 16:06:28.499849 1527131 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:06:28.647459 1527131 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:06:28.854581 1527131 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:06:29.198188 1527131 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:06:29.369603 1527131 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:06:29.759796 1527131 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:06:29.760686 1527131 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:06:29.763405 1527131 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:06:29.766742 1527131 out.go:252]   - Booting up control plane ...
	I1213 16:06:29.766921 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:06:29.767060 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:06:29.767160 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:06:29.788844 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:06:29.789113 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:06:29.796997 1527131 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:06:29.797476 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:06:29.797700 1527131 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:06:29.934060 1527131 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:06:29.934180 1527131 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1213 16:06:29.836878 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:32.336819 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:34.336911 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:36.836814 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:38.837068 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:41.336826 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:43.336942 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:45.836809 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:47.836978 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:49.169418 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:06:49.229366 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:06:49.229477 1532633 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 16:06:49.233284 1532633 out.go:179] * Enabled addons: 
	I1213 16:06:49.236115 1532633 addons.go:530] duration metric: took 1m59.83066349s for enable addons: enabled=[]
	W1213 16:06:50.336853 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:52.836975 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:55.336982 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:57.836819 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:59.837077 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:02.336884 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:04.337044 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:06.836907 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:09.336829 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:11.836902 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:13.836966 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:16.336991 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:18.836964 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:21.336861 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:23.336994 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:25.337136 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:27.837080 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:30.336834 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:32.336947 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:34.337009 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:36.836927 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:39.336872 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:41.836269 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:43.836773 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:45.837030 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:47.837167 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:50.336908 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:52.336995 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:54.836850 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:56.837113 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:59.336907 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:01.836519 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:03.836935 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:05.837188 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:08.336182 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:10.336290 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:12.836188 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:14.837007 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:17.336926 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:19.337137 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:21.836823 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:23.836887 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:26.336902 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:28.836875 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:30.837155 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:33.344927 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:35.836197 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:38.336221 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:40.336266 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:42.336937 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:44.837052 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:47.336949 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:49.337721 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:51.836216 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:54.336802 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:56.337015 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:58.337101 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:00.340034 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:02.837190 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:05.337007 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:07.836179 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:09.836379 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:12.336811 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:14.337024 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:16.836875 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:18.836958 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:21.336809 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:23.337044 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:25.337144 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:27.837183 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:30.336838 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:32.336966 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:34.836253 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:36.837105 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:39.336929 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:41.836911 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:44.336936 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:46.336992 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:48.837015 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:51.336072 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:53.336374 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:55.836834 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:57.837117 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:59.837157 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:02.336184 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:04.336871 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:06.336975 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:08.836835 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:10.836923 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:12.837238 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:15.336203 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:17.337025 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:19.837094 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:22.336928 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:24.836175 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:26.836947 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:10:29.934181 1527131 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000175102s
	I1213 16:10:29.934219 1527131 kubeadm.go:319] 
	I1213 16:10:29.934278 1527131 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 16:10:29.934315 1527131 kubeadm.go:319] 	- The kubelet is not running
	I1213 16:10:29.934420 1527131 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 16:10:29.934431 1527131 kubeadm.go:319] 
	I1213 16:10:29.934571 1527131 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 16:10:29.934616 1527131 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 16:10:29.934646 1527131 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 16:10:29.934653 1527131 kubeadm.go:319] 
	I1213 16:10:29.939000 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 16:10:29.939475 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 16:10:29.939605 1527131 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 16:10:29.939919 1527131 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 16:10:29.939941 1527131 kubeadm.go:319] 
	I1213 16:10:29.940021 1527131 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 16:10:29.940103 1527131 kubeadm.go:403] duration metric: took 8m6.466581637s to StartCluster
	I1213 16:10:29.940140 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:10:29.940207 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:10:29.965453 1527131 cri.go:89] found id: ""
	I1213 16:10:29.965477 1527131 logs.go:282] 0 containers: []
	W1213 16:10:29.965487 1527131 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:10:29.965493 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:10:29.965556 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:10:29.991522 1527131 cri.go:89] found id: ""
	I1213 16:10:29.991547 1527131 logs.go:282] 0 containers: []
	W1213 16:10:29.991560 1527131 logs.go:284] No container was found matching "etcd"
	I1213 16:10:29.991566 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:10:29.991628 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:10:30.032969 1527131 cri.go:89] found id: ""
	I1213 16:10:30.032993 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.033002 1527131 logs.go:284] No container was found matching "coredns"
	I1213 16:10:30.033008 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:10:30.033087 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:10:30.086903 1527131 cri.go:89] found id: ""
	I1213 16:10:30.086929 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.086937 1527131 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:10:30.086944 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:10:30.087018 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:10:30.120054 1527131 cri.go:89] found id: ""
	I1213 16:10:30.120085 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.120097 1527131 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:10:30.120106 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:10:30.120179 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:10:30.147481 1527131 cri.go:89] found id: ""
	I1213 16:10:30.147512 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.147521 1527131 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:10:30.147528 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:10:30.147597 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:10:30.175161 1527131 cri.go:89] found id: ""
	I1213 16:10:30.175192 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.175202 1527131 logs.go:284] No container was found matching "kindnet"
	I1213 16:10:30.175212 1527131 logs.go:123] Gathering logs for kubelet ...
	I1213 16:10:30.175227 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:10:30.236323 1527131 logs.go:123] Gathering logs for dmesg ...
	I1213 16:10:30.236366 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:10:30.252852 1527131 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:10:30.252882 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:10:30.323930 1527131 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:10:30.308479    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.309175    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.310915    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318127    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318780    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:10:30.308479    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.309175    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.310915    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318127    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318780    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:10:30.323954 1527131 logs.go:123] Gathering logs for containerd ...
	I1213 16:10:30.323966 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:10:30.363277 1527131 logs.go:123] Gathering logs for container status ...
	I1213 16:10:30.363323 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 16:10:30.390658 1527131 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000175102s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 16:10:30.390707 1527131 out.go:285] * 
	W1213 16:10:30.390758 1527131 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000175102s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:10:30.390773 1527131 out.go:285] * 
	W1213 16:10:30.392934 1527131 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 16:10:30.397735 1527131 out.go:203] 
	W1213 16:10:30.401437 1527131 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000175102s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:10:30.401483 1527131 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 16:10:30.401510 1527131 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 16:10:30.404721 1527131 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.763599012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.763668968Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.763768436Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.763840582Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.763912326Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.763982060Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.764040315Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.764106184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.764177723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.764268773Z" level=info msg="Connect containerd service"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.764658655Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.765332239Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.781836915Z" level=info msg="Start subscribing containerd event"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.782053583Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.782112346Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.782060179Z" level=info msg="Start recovering state"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821027191Z" level=info msg="Start event monitor"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821077496Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821086587Z" level=info msg="Start streaming server"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821097803Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821106861Z" level=info msg="runtime interface starting up..."
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821113171Z" level=info msg="starting plugins..."
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821124559Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 16:02:21 newest-cni-526531 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.823117415Z" level=info msg="containerd successfully booted in 0.082954s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:10:31.574128    5004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:31.574529    5004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:31.575980    5004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:31.576294    5004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:31.577731    5004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 16:10:31 up  7:52,  0 user,  load average: 0.05, 0.55, 1.22
	Linux newest-cni-526531 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 16:10:28 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:10:29 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 13 16:10:29 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:29 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:29 newest-cni-526531 kubelet[4807]: E1213 16:10:29.151690    4807 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:10:29 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:10:29 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:10:29 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 16:10:29 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:29 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:29 newest-cni-526531 kubelet[4813]: E1213 16:10:29.901823    4813 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:10:29 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:10:29 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:10:30 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 16:10:30 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:30 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:30 newest-cni-526531 kubelet[4900]: E1213 16:10:30.692268    4900 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:10:30 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:10:30 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:10:31 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 16:10:31 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:31 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:31 newest-cni-526531 kubelet[4974]: E1213 16:10:31.399024    4974 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:10:31 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:10:31 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531: exit status 6 (394.781284ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 16:10:32.151990 1539795 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-526531" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-526531" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (501.56s)
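The FirstStart failure above reduces to the v1.35.0-beta.0 kubelet refusing to run on this cgroup v1 host: the kubeadm preflight warning names the KubeletConfiguration option 'FailCgroupV1', the kubelet journal shows the same validation error on every restart, and minikube's own suggestion in the log is to retry with a systemd cgroup driver. A minimal sketch of that retry, reusing the profile name and key flags from this test (not a verified fix for this job):
	# Sketch: recreate the profile with the cgroup driver minikube suggests in the log above.
	out/minikube-linux-arm64 delete -p newest-cni-526531
	out/minikube-linux-arm64 start -p newest-cni-526531 --memory=3072 --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If the kubelet still rejects cgroup v1, the KubeletConfiguration field named by the
	# kubeadm warning (failCgroupV1: false) would also have to reach the generated
	# /var/lib/kubelet/config.yaml; whether minikube can pass that field through is not verified here.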

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (3.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-439544 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-439544 create -f testdata/busybox.yaml: exit status 1 (65.936286ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-439544" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-439544 create -f testdata/busybox.yaml failed: exit status 1
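This DeployApp failure is a knock-on effect: kubectl has no "no-preload-439544" context because the profile never made it into the kubeconfig, which the status stderr below confirms ("does not appear in .../kubeconfig") and which also triggers the stale-context warning. A quick check along the lines of that warning, assuming the profile container is otherwise up (a sketch, not part of the recorded run):
	# Sketch: list the contexts kubectl actually knows about, then let minikube
	# rewrite the kubeconfig entry for this profile, as the status warning suggests.
	kubectl config get-contexts
	out/minikube-linux-arm64 -p no-preload-439544 update-context
	kubectl --context no-preload-439544 get nodes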
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-439544
helpers_test.go:244: (dbg) docker inspect no-preload-439544:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	        "Created": "2025-12-13T15:54:12.178460013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1501116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T15:54:12.242684028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hostname",
	        "HostsPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hosts",
	        "LogPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d-json.log",
	        "Name": "/no-preload-439544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-439544:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-439544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	                "LowerDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-439544",
	                "Source": "/var/lib/docker/volumes/no-preload-439544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-439544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-439544",
	                "name.minikube.sigs.k8s.io": "no-preload-439544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da8c56f1648b4b29d365160a5c9c8f4b83511f3b06bb300dab72442b5fe339b6",
	            "SandboxKey": "/var/run/docker/netns/da8c56f1648b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34193"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34194"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34197"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34195"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34196"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-439544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:8c:8a:2b:c2:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77a202cfc0163f00e436e53caf7341626192055eeb5da6a6f5d953ced7f7adfb",
	                    "EndpointID": "2d33c5fac6c3fc25d8e7af1d5a5218284f13ab87b543c41deb4d4804231c62b5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-439544",
	                        "53d57adb0653"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544: exit status 6 (320.048489ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 16:02:43.956614 1529632 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-439544" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-439544 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─
────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─
────────────────────┤
	│ unpause │ -p old-k8s-version-912710 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-912710       │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:56 UTC │
	│ delete  │ -p old-k8s-version-912710                                                                                                                                                                                                                                  │ old-k8s-version-912710       │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:56 UTC │
	│ delete  │ -p old-k8s-version-912710                                                                                                                                                                                                                                  │ old-k8s-version-912710       │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:56 UTC │
	│ start   │ -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:57 UTC │ 13 Dec 25 15:57 UTC │
	│ stop    │ -p embed-certs-270324 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:57 UTC │ 13 Dec 25 15:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-270324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ start   │ -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ image   │ embed-certs-270324 image list --format=json                                                                                                                                                                                                                │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ pause   │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ unpause │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p disable-driver-mounts-614298                                                                                                                                                                                                                            │ disable-driver-mounts-614298 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 16:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-946932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:00 UTC │
	│ stop    │ -p default-k8s-diff-port-946932 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-946932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ image   │ default-k8s-diff-port-946932 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ pause   │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ unpause │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ start   │ -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 16:02:10
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 16:02:10.653265 1527131 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:02:10.653450 1527131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:02:10.653463 1527131 out.go:374] Setting ErrFile to fd 2...
	I1213 16:02:10.653469 1527131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:02:10.653723 1527131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:02:10.654178 1527131 out.go:368] Setting JSON to false
	I1213 16:02:10.655121 1527131 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27880,"bootTime":1765613851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:02:10.655187 1527131 start.go:143] virtualization:  
	I1213 16:02:10.659173 1527131 out.go:179] * [newest-cni-526531] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:02:10.663186 1527131 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:02:10.663301 1527131 notify.go:221] Checking for updates...
	I1213 16:02:10.669662 1527131 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:02:10.672735 1527131 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:02:10.675695 1527131 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:02:10.678798 1527131 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:02:10.681784 1527131 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:02:10.685234 1527131 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:02:10.685327 1527131 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:02:10.712873 1527131 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:02:10.712998 1527131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:02:10.776591 1527131 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:02:10.767542878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:02:10.776698 1527131 docker.go:319] overlay module found
	I1213 16:02:10.779851 1527131 out.go:179] * Using the docker driver based on user configuration
	I1213 16:02:10.782749 1527131 start.go:309] selected driver: docker
	I1213 16:02:10.782766 1527131 start.go:927] validating driver "docker" against <nil>
	I1213 16:02:10.782781 1527131 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:02:10.783532 1527131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:02:10.836394 1527131 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:02:10.826578222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:02:10.836552 1527131 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 16:02:10.836580 1527131 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 16:02:10.836798 1527131 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 16:02:10.839799 1527131 out.go:179] * Using Docker driver with root privileges
	I1213 16:02:10.842710 1527131 cni.go:84] Creating CNI manager for ""
	I1213 16:02:10.842780 1527131 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:02:10.842796 1527131 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 16:02:10.842882 1527131 start.go:353] cluster config:
	{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:02:10.846082 1527131 out.go:179] * Starting "newest-cni-526531" primary control-plane node in "newest-cni-526531" cluster
	I1213 16:02:10.848967 1527131 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:02:10.851950 1527131 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:02:10.854779 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:10.854844 1527131 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 16:02:10.854855 1527131 cache.go:65] Caching tarball of preloaded images
	I1213 16:02:10.854853 1527131 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:02:10.854953 1527131 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 16:02:10.854964 1527131 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 16:02:10.855092 1527131 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:02:10.855111 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json: {Name:mk86a24d01142c8f16a845d4170f48ade207872d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:10.882520 1527131 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:02:10.882541 1527131 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:02:10.882562 1527131 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:02:10.882591 1527131 start.go:360] acquireMachinesLock for newest-cni-526531: {Name:mk8328ad899404812480e264edd6e13cbbd26230 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:02:10.883398 1527131 start.go:364] duration metric: took 789.437µs to acquireMachinesLock for "newest-cni-526531"
	I1213 16:02:10.883434 1527131 start.go:93] Provisioning new machine with config: &{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:02:10.883509 1527131 start.go:125] createHost starting for "" (driver="docker")
	I1213 16:02:10.886860 1527131 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 16:02:10.887084 1527131 start.go:159] libmachine.API.Create for "newest-cni-526531" (driver="docker")
	I1213 16:02:10.887118 1527131 client.go:173] LocalClient.Create starting
	I1213 16:02:10.887190 1527131 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem
	I1213 16:02:10.887231 1527131 main.go:143] libmachine: Decoding PEM data...
	I1213 16:02:10.887246 1527131 main.go:143] libmachine: Parsing certificate...
	I1213 16:02:10.887296 1527131 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem
	I1213 16:02:10.887414 1527131 main.go:143] libmachine: Decoding PEM data...
	I1213 16:02:10.887431 1527131 main.go:143] libmachine: Parsing certificate...
	I1213 16:02:10.887816 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 16:02:10.908607 1527131 cli_runner.go:211] docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 16:02:10.908685 1527131 network_create.go:284] running [docker network inspect newest-cni-526531] to gather additional debugging logs...
	I1213 16:02:10.908709 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531
	W1213 16:02:10.924665 1527131 cli_runner.go:211] docker network inspect newest-cni-526531 returned with exit code 1
	I1213 16:02:10.924698 1527131 network_create.go:287] error running [docker network inspect newest-cni-526531]: docker network inspect newest-cni-526531: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-526531 not found
	I1213 16:02:10.924713 1527131 network_create.go:289] output of [docker network inspect newest-cni-526531]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-526531 not found
	
	** /stderr **
	I1213 16:02:10.924834 1527131 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:02:10.945123 1527131 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2ccf9a9eb2c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:9e:64:ed:f4:f2} reservation:<nil>}
	I1213 16:02:10.945400 1527131 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2add06a95dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:3f:19:f0:2f:b1} reservation:<nil>}
	I1213 16:02:10.945650 1527131 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8517ffe6861d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:d6:d4:51:8b:3d} reservation:<nil>}
	I1213 16:02:10.946092 1527131 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a39030}
	I1213 16:02:10.946118 1527131 network_create.go:124] attempt to create docker network newest-cni-526531 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 16:02:10.946180 1527131 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-526531 newest-cni-526531
	I1213 16:02:11.005690 1527131 network_create.go:108] docker network newest-cni-526531 192.168.76.0/24 created
	I1213 16:02:11.005737 1527131 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-526531" container
	I1213 16:02:11.005844 1527131 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 16:02:11.023684 1527131 cli_runner.go:164] Run: docker volume create newest-cni-526531 --label name.minikube.sigs.k8s.io=newest-cni-526531 --label created_by.minikube.sigs.k8s.io=true
	I1213 16:02:11.043087 1527131 oci.go:103] Successfully created a docker volume newest-cni-526531
	I1213 16:02:11.043189 1527131 cli_runner.go:164] Run: docker run --rm --name newest-cni-526531-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-526531 --entrypoint /usr/bin/test -v newest-cni-526531:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 16:02:11.614357 1527131 oci.go:107] Successfully prepared a docker volume newest-cni-526531
	I1213 16:02:11.614420 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:11.614431 1527131 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 16:02:11.614506 1527131 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-526531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 16:02:15.477407 1527131 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-526531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.862862091s)
	I1213 16:02:15.477459 1527131 kic.go:203] duration metric: took 3.863024311s to extract preloaded images to volume ...
	W1213 16:02:15.477597 1527131 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 16:02:15.477708 1527131 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 16:02:15.532223 1527131 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-526531 --name newest-cni-526531 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-526531 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-526531 --network newest-cni-526531 --ip 192.168.76.2 --volume newest-cni-526531:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 16:02:15.845102 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Running}}
	I1213 16:02:15.866861 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:15.892916 1527131 cli_runner.go:164] Run: docker exec newest-cni-526531 stat /var/lib/dpkg/alternatives/iptables
	I1213 16:02:15.948563 1527131 oci.go:144] the created container "newest-cni-526531" has a running status.
	I1213 16:02:15.948590 1527131 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa...
	I1213 16:02:16.266786 1527131 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 16:02:16.296564 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:16.329593 1527131 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 16:02:16.329619 1527131 kic_runner.go:114] Args: [docker exec --privileged newest-cni-526531 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 16:02:16.396781 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:16.416507 1527131 machine.go:94] provisionDockerMachine start ...
	I1213 16:02:16.416610 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:16.437096 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:16.437445 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:16.437455 1527131 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:02:16.438031 1527131 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45440->127.0.0.1:34223: read: connection reset by peer
	I1213 16:02:19.590785 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:02:19.590808 1527131 ubuntu.go:182] provisioning hostname "newest-cni-526531"
	I1213 16:02:19.590880 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:19.609205 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:19.609519 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:19.609531 1527131 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-526531 && echo "newest-cni-526531" | sudo tee /etc/hostname
	I1213 16:02:19.768653 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:02:19.768776 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:19.785859 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:19.786173 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:19.786190 1527131 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-526531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-526531/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-526531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:02:19.943619 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:02:19.943646 1527131 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:02:19.943683 1527131 ubuntu.go:190] setting up certificates
	I1213 16:02:19.943694 1527131 provision.go:84] configureAuth start
	I1213 16:02:19.943767 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:19.960971 1527131 provision.go:143] copyHostCerts
	I1213 16:02:19.961044 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:02:19.961058 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:02:19.961139 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:02:19.961239 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:02:19.961249 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:02:19.961277 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:02:19.961346 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:02:19.961355 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:02:19.961380 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:02:19.961441 1527131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.newest-cni-526531 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-526531]
	I1213 16:02:20.054612 1527131 provision.go:177] copyRemoteCerts
	I1213 16:02:20.054686 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:02:20.054736 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.072851 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.179668 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:02:20.198845 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 16:02:20.217676 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 16:02:20.236010 1527131 provision.go:87] duration metric: took 292.302594ms to configureAuth
	I1213 16:02:20.236050 1527131 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:02:20.236287 1527131 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:02:20.236298 1527131 machine.go:97] duration metric: took 3.819772251s to provisionDockerMachine
	I1213 16:02:20.236311 1527131 client.go:176] duration metric: took 9.349180869s to LocalClient.Create
	I1213 16:02:20.236333 1527131 start.go:167] duration metric: took 9.349249118s to libmachine.API.Create "newest-cni-526531"
	I1213 16:02:20.236344 1527131 start.go:293] postStartSetup for "newest-cni-526531" (driver="docker")
	I1213 16:02:20.236355 1527131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:02:20.236412 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:02:20.236459 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.253931 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.359511 1527131 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:02:20.363075 1527131 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:02:20.363102 1527131 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:02:20.363114 1527131 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:02:20.363170 1527131 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:02:20.363253 1527131 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:02:20.363383 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:02:20.370977 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:02:20.388811 1527131 start.go:296] duration metric: took 152.451817ms for postStartSetup
	I1213 16:02:20.389184 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:20.406647 1527131 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:02:20.406930 1527131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:02:20.406975 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.424459 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.529476 1527131 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:02:20.539030 1527131 start.go:128] duration metric: took 9.655490819s to createHost
	I1213 16:02:20.539056 1527131 start.go:83] releasing machines lock for "newest-cni-526531", held for 9.655642684s
	I1213 16:02:20.539196 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:20.566091 1527131 ssh_runner.go:195] Run: cat /version.json
	I1213 16:02:20.566128 1527131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:02:20.566142 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.566184 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.588830 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.608973 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.799431 1527131 ssh_runner.go:195] Run: systemctl --version
	I1213 16:02:20.806227 1527131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:02:20.810716 1527131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:02:20.810789 1527131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:02:20.839037 1527131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 16:02:20.839104 1527131 start.go:496] detecting cgroup driver to use...
	I1213 16:02:20.839151 1527131 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:02:20.839236 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:02:20.854464 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:02:20.867574 1527131 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:02:20.867669 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:02:20.885257 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:02:20.903596 1527131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:02:21.022899 1527131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:02:21.152487 1527131 docker.go:234] disabling docker service ...
	I1213 16:02:21.152550 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:02:21.174727 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:02:21.188382 1527131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:02:21.299657 1527131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:02:21.434130 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:02:21.446805 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:02:21.461400 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:02:21.470517 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:02:21.479694 1527131 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:02:21.479759 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:02:21.494124 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:02:21.502957 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:02:21.512551 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:02:21.521611 1527131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:02:21.530083 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:02:21.539325 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:02:21.548742 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:02:21.557617 1527131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:02:21.565268 1527131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:02:21.572714 1527131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:02:21.683769 1527131 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 16:02:21.823560 1527131 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:02:21.823710 1527131 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:02:21.827515 1527131 start.go:564] Will wait 60s for crictl version
	I1213 16:02:21.827583 1527131 ssh_runner.go:195] Run: which crictl
	I1213 16:02:21.831175 1527131 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:02:21.854565 1527131 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 16:02:21.854637 1527131 ssh_runner.go:195] Run: containerd --version
	I1213 16:02:21.878720 1527131 ssh_runner.go:195] Run: containerd --version
	I1213 16:02:21.901809 1527131 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 16:02:21.904695 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:02:21.920670 1527131 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 16:02:21.924637 1527131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:02:21.937646 1527131 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 16:02:21.940537 1527131 kubeadm.go:884] updating cluster {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:02:21.940697 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:21.940787 1527131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:02:21.972241 1527131 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:02:21.972268 1527131 containerd.go:534] Images already preloaded, skipping extraction
	I1213 16:02:21.972335 1527131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:02:22.011228 1527131 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:02:22.011254 1527131 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:02:22.011263 1527131 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 16:02:22.011415 1527131 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-526531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 16:02:22.011503 1527131 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:02:22.037059 1527131 cni.go:84] Creating CNI manager for ""
	I1213 16:02:22.037085 1527131 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:02:22.037100 1527131 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 16:02:22.037123 1527131 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-526531 NodeName:newest-cni-526531 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:02:22.037245 1527131 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-526531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 16:02:22.037324 1527131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 16:02:22.045616 1527131 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:02:22.045746 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:02:22.054164 1527131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 16:02:22.068023 1527131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 16:02:22.085623 1527131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 16:02:22.101118 1527131 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:02:22.105257 1527131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:02:22.115696 1527131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:02:22.236674 1527131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:02:22.253725 1527131 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531 for IP: 192.168.76.2
	I1213 16:02:22.253801 1527131 certs.go:195] generating shared ca certs ...
	I1213 16:02:22.253832 1527131 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.254016 1527131 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:02:22.254124 1527131 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:02:22.254153 1527131 certs.go:257] generating profile certs ...
	I1213 16:02:22.254236 1527131 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key
	I1213 16:02:22.254267 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt with IP's: []
	I1213 16:02:22.746862 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt ...
	I1213 16:02:22.746902 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt: {Name:mk7b618219326f9fba540570e126db6afef7db97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.747100 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key ...
	I1213 16:02:22.747113 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key: {Name:mkadefb7fb5fbcd2154d988162829a52daab8655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.747208 1527131 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7
	I1213 16:02:22.747225 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 16:02:22.809461 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 ...
	I1213 16:02:22.809493 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7: {Name:mkce6931933926d60edd03298cb3538c188eea65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.809651 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7 ...
	I1213 16:02:22.809660 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7: {Name:mk5267764b911bf176ac97c9b4dd7d199f6b5ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.809731 1527131 certs.go:382] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt
	I1213 16:02:22.809817 1527131 certs.go:386] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key
	I1213 16:02:22.809875 1527131 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key
	I1213 16:02:22.809898 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt with IP's: []
	I1213 16:02:23.001038 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt ...
	I1213 16:02:23.001077 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt: {Name:mk387ba28125d038f533411623a4bd220070ddcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:23.002037 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key ...
	I1213 16:02:23.002079 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key: {Name:mk1a039510f32e55e5dd18d9c94a59fef628608a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:23.002321 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:02:23.002370 1527131 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:02:23.002380 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:02:23.002408 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:02:23.002444 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:02:23.002470 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:02:23.002520 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:02:23.003157 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:02:23.024481 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:02:23.042947 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:02:23.062246 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:02:23.080909 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 16:02:23.101609 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 16:02:23.121532 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:02:23.141397 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1213 16:02:23.162222 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:02:23.180800 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:02:23.199086 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:02:23.216531 1527131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:02:23.229620 1527131 ssh_runner.go:195] Run: openssl version
	I1213 16:02:23.236222 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.244051 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:02:23.251982 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.255821 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.255903 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.297335 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:02:23.305087 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12529342.pem /etc/ssl/certs/3ec20f2e.0
	I1213 16:02:23.312878 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.320527 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:02:23.328098 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.331918 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.331997 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.373256 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:02:23.381999 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 16:02:23.389673 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.397973 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:02:23.406099 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.410027 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.410090 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.453652 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:02:23.461102 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1252934.pem /etc/ssl/certs/51391683.0
	I1213 16:02:23.469641 1527131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:02:23.473464 1527131 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 16:02:23.473520 1527131 kubeadm.go:401] StartCluster: {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:02:23.473612 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:02:23.473675 1527131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:02:23.501906 1527131 cri.go:89] found id: ""
	I1213 16:02:23.501976 1527131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:02:23.509856 1527131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 16:02:23.517759 1527131 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 16:02:23.517824 1527131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 16:02:23.525757 1527131 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 16:02:23.525778 1527131 kubeadm.go:158] found existing configuration files:
	
	I1213 16:02:23.525864 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 16:02:23.533675 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 16:02:23.533781 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 16:02:23.541421 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 16:02:23.549139 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 16:02:23.549209 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 16:02:23.556514 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 16:02:23.563859 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 16:02:23.563926 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 16:02:23.571345 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 16:02:23.578972 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 16:02:23.579034 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 16:02:23.588349 1527131 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 16:02:23.644568 1527131 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:02:23.644844 1527131 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:02:23.719501 1527131 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:02:23.719596 1527131 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:02:23.719638 1527131 kubeadm.go:319] OS: Linux
	I1213 16:02:23.719695 1527131 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:02:23.719756 1527131 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:02:23.719822 1527131 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:02:23.719885 1527131 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:02:23.719948 1527131 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:02:23.720014 1527131 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:02:23.720065 1527131 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:02:23.720126 1527131 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:02:23.720184 1527131 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:02:23.799280 1527131 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:02:23.799447 1527131 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:02:23.799586 1527131 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:02:23.813871 1527131 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:02:23.820586 1527131 out.go:252]   - Generating certificates and keys ...
	I1213 16:02:23.820722 1527131 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:02:23.820831 1527131 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:02:24.062915 1527131 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 16:02:24.119432 1527131 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 16:02:24.837877 1527131 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 16:02:25.323783 1527131 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 16:02:25.382177 1527131 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 16:02:25.382477 1527131 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:02:25.533405 1527131 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 16:02:25.533842 1527131 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:02:25.796805 1527131 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 16:02:25.975896 1527131 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 16:02:26.105650 1527131 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 16:02:26.105962 1527131 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:02:26.444172 1527131 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:02:26.939066 1527131 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:02:27.121431 1527131 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:02:27.579446 1527131 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:02:27.628725 1527131 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:02:27.629390 1527131 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:02:27.631991 1527131 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:02:27.635735 1527131 out.go:252]   - Booting up control plane ...
	I1213 16:02:27.635847 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:02:27.635926 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:02:27.635993 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:02:27.657055 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:02:27.657166 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:02:27.664926 1527131 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:02:27.665403 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:02:27.665639 1527131 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:02:27.803169 1527131 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:02:27.803302 1527131 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 16:02:41.545926 1500765 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 16:02:41.546108 1500765 kubeadm.go:319] 
	I1213 16:02:41.546236 1500765 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 16:02:41.551134 1500765 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:02:41.551190 1500765 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:02:41.551289 1500765 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:02:41.551373 1500765 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:02:41.551414 1500765 kubeadm.go:319] OS: Linux
	I1213 16:02:41.551459 1500765 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:02:41.551511 1500765 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:02:41.551561 1500765 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:02:41.551612 1500765 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:02:41.551663 1500765 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:02:41.551715 1500765 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:02:41.551764 1500765 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:02:41.551816 1500765 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:02:41.551866 1500765 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:02:41.551941 1500765 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:02:41.552042 1500765 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:02:41.552133 1500765 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:02:41.552199 1500765 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:02:41.555522 1500765 out.go:252]   - Generating certificates and keys ...
	I1213 16:02:41.555641 1500765 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:02:41.555717 1500765 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:02:41.555797 1500765 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 16:02:41.555873 1500765 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 16:02:41.555970 1500765 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 16:02:41.556031 1500765 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 16:02:41.556110 1500765 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 16:02:41.556213 1500765 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 16:02:41.556310 1500765 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 16:02:41.556431 1500765 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 16:02:41.556486 1500765 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 16:02:41.556559 1500765 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:02:41.556617 1500765 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:02:41.556678 1500765 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:02:41.556736 1500765 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:02:41.556817 1500765 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:02:41.556888 1500765 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:02:41.556980 1500765 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:02:41.557075 1500765 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:02:41.560042 1500765 out.go:252]   - Booting up control plane ...
	I1213 16:02:41.560143 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:02:41.560258 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:02:41.560348 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:02:41.560479 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:02:41.560588 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:02:41.560701 1500765 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:02:41.560824 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:02:41.560880 1500765 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:02:41.561017 1500765 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:02:41.561131 1500765 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 16:02:41.561233 1500765 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000293839s
	I1213 16:02:41.561265 1500765 kubeadm.go:319] 
	I1213 16:02:41.561329 1500765 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 16:02:41.561367 1500765 kubeadm.go:319] 	- The kubelet is not running
	I1213 16:02:41.561492 1500765 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 16:02:41.561506 1500765 kubeadm.go:319] 
	I1213 16:02:41.561630 1500765 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 16:02:41.561673 1500765 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 16:02:41.561708 1500765 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 16:02:41.561776 1500765 kubeadm.go:319] 
	I1213 16:02:41.561777 1500765 kubeadm.go:403] duration metric: took 8m8.131517099s to StartCluster
	I1213 16:02:41.561824 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:02:41.561903 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:02:41.594564 1500765 cri.go:89] found id: ""
	I1213 16:02:41.594594 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.594603 1500765 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:02:41.594609 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:02:41.594677 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:02:41.629231 1500765 cri.go:89] found id: ""
	I1213 16:02:41.629252 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.629260 1500765 logs.go:284] No container was found matching "etcd"
	I1213 16:02:41.629266 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:02:41.629322 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:02:41.656157 1500765 cri.go:89] found id: ""
	I1213 16:02:41.656181 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.656190 1500765 logs.go:284] No container was found matching "coredns"
	I1213 16:02:41.656196 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:02:41.656276 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:02:41.681173 1500765 cri.go:89] found id: ""
	I1213 16:02:41.681208 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.681217 1500765 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:02:41.681224 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:02:41.681308 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:02:41.708543 1500765 cri.go:89] found id: ""
	I1213 16:02:41.708568 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.708577 1500765 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:02:41.708583 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:02:41.708660 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:02:41.737039 1500765 cri.go:89] found id: ""
	I1213 16:02:41.737062 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.737071 1500765 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:02:41.737079 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:02:41.737137 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:02:41.762249 1500765 cri.go:89] found id: ""
	I1213 16:02:41.762275 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.762283 1500765 logs.go:284] No container was found matching "kindnet"
	I1213 16:02:41.762294 1500765 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:02:41.762306 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:02:41.828774 1500765 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:02:41.820905    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.821458    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823177    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823750    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.825347    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:02:41.820905    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.821458    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823177    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823750    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.825347    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:02:41.828797 1500765 logs.go:123] Gathering logs for containerd ...
	I1213 16:02:41.828810 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:02:41.870479 1500765 logs.go:123] Gathering logs for container status ...
	I1213 16:02:41.870512 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:02:41.897347 1500765 logs.go:123] Gathering logs for kubelet ...
	I1213 16:02:41.897374 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:02:41.954515 1500765 logs.go:123] Gathering logs for dmesg ...
	I1213 16:02:41.954549 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 16:02:41.971648 1500765 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 16:02:41.971703 1500765 out.go:285] * 
	W1213 16:02:41.971970 1500765 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:02:41.971990 1500765 out.go:285] * 
	W1213 16:02:41.974206 1500765 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 16:02:41.979727 1500765 out.go:203] 
	W1213 16:02:41.982586 1500765 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:02:41.982624 1500765 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 16:02:41.982645 1500765 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 16:02:41.985873 1500765 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 15:54:23 no-preload-439544 containerd[760]: time="2025-12-13T15:54:23.378148111Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.600685306Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.603732906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.611915029Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.613116551Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.503138610Z" level=info msg="No images store for sha256:84ea4651cf4d4486006d1346129c6964687be99508987d0ca606406fbc15a298"
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.506879683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\""
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.528281020Z" level=info msg="ImageCreate event name:\"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.529509930Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.056611379Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.059970700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.072962113Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.074433027Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.221784082Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.224970821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.232633350Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.233266000Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.393544387Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.395762984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.407681609Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.408407697Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.791409724Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.793787530Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.800749932Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.801079615Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:02:44.614803    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:44.615589    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:44.617356    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:44.617909    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:44.619515    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 16:02:44 up  7:45,  0 user,  load average: 1.32, 1.71, 1.84
	Linux no-preload-439544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 16:02:41 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:02:42 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 16:02:42 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:42 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:42 no-preload-439544 kubelet[5473]: E1213 16:02:42.388098    5473 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:02:42 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:02:42 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:02:43 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 16:02:43 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:43 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:43 no-preload-439544 kubelet[5569]: E1213 16:02:43.152083    5569 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:02:43 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:02:43 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:02:43 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 16:02:43 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:43 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:43 no-preload-439544 kubelet[5606]: E1213 16:02:43.878177    5606 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:02:43 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:02:43 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:02:44 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 16:02:44 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:44 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:44 no-preload-439544 kubelet[5701]: E1213 16:02:44.646715    5701 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:02:44 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:02:44 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544: exit status 6 (479.199898ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 16:02:45.209150 1529859 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-439544" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-439544" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-439544
helpers_test.go:244: (dbg) docker inspect no-preload-439544:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	        "Created": "2025-12-13T15:54:12.178460013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1501116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T15:54:12.242684028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hostname",
	        "HostsPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hosts",
	        "LogPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d-json.log",
	        "Name": "/no-preload-439544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-439544:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-439544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	                "LowerDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-439544",
	                "Source": "/var/lib/docker/volumes/no-preload-439544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-439544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-439544",
	                "name.minikube.sigs.k8s.io": "no-preload-439544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da8c56f1648b4b29d365160a5c9c8f4b83511f3b06bb300dab72442b5fe339b6",
	            "SandboxKey": "/var/run/docker/netns/da8c56f1648b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34193"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34194"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34197"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34195"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34196"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-439544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:8c:8a:2b:c2:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77a202cfc0163f00e436e53caf7341626192055eeb5da6a6f5d953ced7f7adfb",
	                    "EndpointID": "2d33c5fac6c3fc25d8e7af1d5a5218284f13ab87b543c41deb4d4804231c62b5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-439544",
	                        "53d57adb0653"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544: exit status 6 (335.491644ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 16:02:45.593438 1529940 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-439544" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-439544 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─
────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─
────────────────────┤
	│ unpause │ -p old-k8s-version-912710 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-912710       │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:56 UTC │
	│ delete  │ -p old-k8s-version-912710                                                                                                                                                                                                                                  │ old-k8s-version-912710       │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:56 UTC │
	│ delete  │ -p old-k8s-version-912710                                                                                                                                                                                                                                  │ old-k8s-version-912710       │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:56 UTC │
	│ start   │ -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:57 UTC │ 13 Dec 25 15:57 UTC │
	│ stop    │ -p embed-certs-270324 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:57 UTC │ 13 Dec 25 15:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-270324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ start   │ -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ image   │ embed-certs-270324 image list --format=json                                                                                                                                                                                                                │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ pause   │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ unpause │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p disable-driver-mounts-614298                                                                                                                                                                                                                            │ disable-driver-mounts-614298 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 16:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-946932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:00 UTC │
	│ stop    │ -p default-k8s-diff-port-946932 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-946932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ image   │ default-k8s-diff-port-946932 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ pause   │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ unpause │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ start   │ -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 16:02:10
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 16:02:10.653265 1527131 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:02:10.653450 1527131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:02:10.653463 1527131 out.go:374] Setting ErrFile to fd 2...
	I1213 16:02:10.653469 1527131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:02:10.653723 1527131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:02:10.654178 1527131 out.go:368] Setting JSON to false
	I1213 16:02:10.655121 1527131 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27880,"bootTime":1765613851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:02:10.655187 1527131 start.go:143] virtualization:  
	I1213 16:02:10.659173 1527131 out.go:179] * [newest-cni-526531] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:02:10.663186 1527131 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:02:10.663301 1527131 notify.go:221] Checking for updates...
	I1213 16:02:10.669662 1527131 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:02:10.672735 1527131 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:02:10.675695 1527131 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:02:10.678798 1527131 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:02:10.681784 1527131 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:02:10.685234 1527131 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:02:10.685327 1527131 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:02:10.712873 1527131 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:02:10.712998 1527131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:02:10.776591 1527131 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:02:10.767542878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:02:10.776698 1527131 docker.go:319] overlay module found
	I1213 16:02:10.779851 1527131 out.go:179] * Using the docker driver based on user configuration
	I1213 16:02:10.782749 1527131 start.go:309] selected driver: docker
	I1213 16:02:10.782766 1527131 start.go:927] validating driver "docker" against <nil>
	I1213 16:02:10.782781 1527131 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:02:10.783532 1527131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:02:10.836394 1527131 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:02:10.826578222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:02:10.836552 1527131 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 16:02:10.836580 1527131 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 16:02:10.836798 1527131 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 16:02:10.839799 1527131 out.go:179] * Using Docker driver with root privileges
	I1213 16:02:10.842710 1527131 cni.go:84] Creating CNI manager for ""
	I1213 16:02:10.842780 1527131 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:02:10.842796 1527131 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 16:02:10.842882 1527131 start.go:353] cluster config:
	{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:02:10.846082 1527131 out.go:179] * Starting "newest-cni-526531" primary control-plane node in "newest-cni-526531" cluster
	I1213 16:02:10.848967 1527131 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:02:10.851950 1527131 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:02:10.854779 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:10.854844 1527131 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 16:02:10.854855 1527131 cache.go:65] Caching tarball of preloaded images
	I1213 16:02:10.854853 1527131 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:02:10.854953 1527131 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 16:02:10.854964 1527131 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 16:02:10.855092 1527131 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:02:10.855111 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json: {Name:mk86a24d01142c8f16a845d4170f48ade207872d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:10.882520 1527131 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:02:10.882541 1527131 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:02:10.882562 1527131 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:02:10.882591 1527131 start.go:360] acquireMachinesLock for newest-cni-526531: {Name:mk8328ad899404812480e264edd6e13cbbd26230 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:02:10.883398 1527131 start.go:364] duration metric: took 789.437µs to acquireMachinesLock for "newest-cni-526531"
	I1213 16:02:10.883434 1527131 start.go:93] Provisioning new machine with config: &{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:02:10.883509 1527131 start.go:125] createHost starting for "" (driver="docker")
	I1213 16:02:10.886860 1527131 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 16:02:10.887084 1527131 start.go:159] libmachine.API.Create for "newest-cni-526531" (driver="docker")
	I1213 16:02:10.887118 1527131 client.go:173] LocalClient.Create starting
	I1213 16:02:10.887190 1527131 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem
	I1213 16:02:10.887231 1527131 main.go:143] libmachine: Decoding PEM data...
	I1213 16:02:10.887246 1527131 main.go:143] libmachine: Parsing certificate...
	I1213 16:02:10.887296 1527131 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem
	I1213 16:02:10.887414 1527131 main.go:143] libmachine: Decoding PEM data...
	I1213 16:02:10.887431 1527131 main.go:143] libmachine: Parsing certificate...
	I1213 16:02:10.887816 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 16:02:10.908607 1527131 cli_runner.go:211] docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 16:02:10.908685 1527131 network_create.go:284] running [docker network inspect newest-cni-526531] to gather additional debugging logs...
	I1213 16:02:10.908709 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531
	W1213 16:02:10.924665 1527131 cli_runner.go:211] docker network inspect newest-cni-526531 returned with exit code 1
	I1213 16:02:10.924698 1527131 network_create.go:287] error running [docker network inspect newest-cni-526531]: docker network inspect newest-cni-526531: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-526531 not found
	I1213 16:02:10.924713 1527131 network_create.go:289] output of [docker network inspect newest-cni-526531]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-526531 not found
	
	** /stderr **
	I1213 16:02:10.924834 1527131 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:02:10.945123 1527131 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2ccf9a9eb2c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:9e:64:ed:f4:f2} reservation:<nil>}
	I1213 16:02:10.945400 1527131 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2add06a95dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:3f:19:f0:2f:b1} reservation:<nil>}
	I1213 16:02:10.945650 1527131 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8517ffe6861d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:d6:d4:51:8b:3d} reservation:<nil>}
	I1213 16:02:10.946092 1527131 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a39030}
	I1213 16:02:10.946118 1527131 network_create.go:124] attempt to create docker network newest-cni-526531 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 16:02:10.946180 1527131 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-526531 newest-cni-526531
	I1213 16:02:11.005690 1527131 network_create.go:108] docker network newest-cni-526531 192.168.76.0/24 created
	I1213 16:02:11.005737 1527131 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-526531" container
	I1213 16:02:11.005844 1527131 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 16:02:11.023684 1527131 cli_runner.go:164] Run: docker volume create newest-cni-526531 --label name.minikube.sigs.k8s.io=newest-cni-526531 --label created_by.minikube.sigs.k8s.io=true
	I1213 16:02:11.043087 1527131 oci.go:103] Successfully created a docker volume newest-cni-526531
	I1213 16:02:11.043189 1527131 cli_runner.go:164] Run: docker run --rm --name newest-cni-526531-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-526531 --entrypoint /usr/bin/test -v newest-cni-526531:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 16:02:11.614357 1527131 oci.go:107] Successfully prepared a docker volume newest-cni-526531
	I1213 16:02:11.614420 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:11.614431 1527131 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 16:02:11.614506 1527131 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-526531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 16:02:15.477407 1527131 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-526531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.862862091s)
	I1213 16:02:15.477459 1527131 kic.go:203] duration metric: took 3.863024311s to extract preloaded images to volume ...
	W1213 16:02:15.477597 1527131 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 16:02:15.477708 1527131 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 16:02:15.532223 1527131 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-526531 --name newest-cni-526531 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-526531 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-526531 --network newest-cni-526531 --ip 192.168.76.2 --volume newest-cni-526531:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 16:02:15.845102 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Running}}
	I1213 16:02:15.866861 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:15.892916 1527131 cli_runner.go:164] Run: docker exec newest-cni-526531 stat /var/lib/dpkg/alternatives/iptables
	I1213 16:02:15.948563 1527131 oci.go:144] the created container "newest-cni-526531" has a running status.
	I1213 16:02:15.948590 1527131 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa...
	I1213 16:02:16.266786 1527131 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 16:02:16.296564 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:16.329593 1527131 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 16:02:16.329619 1527131 kic_runner.go:114] Args: [docker exec --privileged newest-cni-526531 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 16:02:16.396781 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:16.416507 1527131 machine.go:94] provisionDockerMachine start ...
	I1213 16:02:16.416610 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:16.437096 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:16.437445 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:16.437455 1527131 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:02:16.438031 1527131 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45440->127.0.0.1:34223: read: connection reset by peer
	I1213 16:02:19.590785 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:02:19.590808 1527131 ubuntu.go:182] provisioning hostname "newest-cni-526531"
	I1213 16:02:19.590880 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:19.609205 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:19.609519 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:19.609531 1527131 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-526531 && echo "newest-cni-526531" | sudo tee /etc/hostname
	I1213 16:02:19.768653 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:02:19.768776 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:19.785859 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:19.786173 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:19.786190 1527131 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-526531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-526531/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-526531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:02:19.943619 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:02:19.943646 1527131 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:02:19.943683 1527131 ubuntu.go:190] setting up certificates
	I1213 16:02:19.943694 1527131 provision.go:84] configureAuth start
	I1213 16:02:19.943767 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:19.960971 1527131 provision.go:143] copyHostCerts
	I1213 16:02:19.961044 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:02:19.961058 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:02:19.961139 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:02:19.961239 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:02:19.961249 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:02:19.961277 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:02:19.961346 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:02:19.961355 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:02:19.961380 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:02:19.961441 1527131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.newest-cni-526531 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-526531]
	I1213 16:02:20.054612 1527131 provision.go:177] copyRemoteCerts
	I1213 16:02:20.054686 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:02:20.054736 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.072851 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.179668 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:02:20.198845 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 16:02:20.217676 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 16:02:20.236010 1527131 provision.go:87] duration metric: took 292.302594ms to configureAuth
	I1213 16:02:20.236050 1527131 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:02:20.236287 1527131 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:02:20.236298 1527131 machine.go:97] duration metric: took 3.819772251s to provisionDockerMachine
	I1213 16:02:20.236311 1527131 client.go:176] duration metric: took 9.349180869s to LocalClient.Create
	I1213 16:02:20.236333 1527131 start.go:167] duration metric: took 9.349249118s to libmachine.API.Create "newest-cni-526531"
	I1213 16:02:20.236344 1527131 start.go:293] postStartSetup for "newest-cni-526531" (driver="docker")
	I1213 16:02:20.236355 1527131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:02:20.236412 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:02:20.236459 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.253931 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.359511 1527131 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:02:20.363075 1527131 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:02:20.363102 1527131 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:02:20.363114 1527131 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:02:20.363170 1527131 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:02:20.363253 1527131 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:02:20.363383 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:02:20.370977 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:02:20.388811 1527131 start.go:296] duration metric: took 152.451817ms for postStartSetup
	I1213 16:02:20.389184 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:20.406647 1527131 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:02:20.406930 1527131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:02:20.406975 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.424459 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.529476 1527131 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:02:20.539030 1527131 start.go:128] duration metric: took 9.655490819s to createHost
	I1213 16:02:20.539056 1527131 start.go:83] releasing machines lock for "newest-cni-526531", held for 9.655642684s
	I1213 16:02:20.539196 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:20.566091 1527131 ssh_runner.go:195] Run: cat /version.json
	I1213 16:02:20.566128 1527131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:02:20.566142 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.566184 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.588830 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.608973 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.799431 1527131 ssh_runner.go:195] Run: systemctl --version
	I1213 16:02:20.806227 1527131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:02:20.810716 1527131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:02:20.810789 1527131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:02:20.839037 1527131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 16:02:20.839104 1527131 start.go:496] detecting cgroup driver to use...
	I1213 16:02:20.839151 1527131 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:02:20.839236 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:02:20.854464 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:02:20.867574 1527131 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:02:20.867669 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:02:20.885257 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:02:20.903596 1527131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:02:21.022899 1527131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:02:21.152487 1527131 docker.go:234] disabling docker service ...
	I1213 16:02:21.152550 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:02:21.174727 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:02:21.188382 1527131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:02:21.299657 1527131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:02:21.434130 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:02:21.446805 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:02:21.461400 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:02:21.470517 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:02:21.479694 1527131 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:02:21.479759 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:02:21.494124 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:02:21.502957 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:02:21.512551 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:02:21.521611 1527131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:02:21.530083 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:02:21.539325 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:02:21.548742 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:02:21.557617 1527131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:02:21.565268 1527131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:02:21.572714 1527131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:02:21.683769 1527131 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 16:02:21.823560 1527131 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:02:21.823710 1527131 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:02:21.827515 1527131 start.go:564] Will wait 60s for crictl version
	I1213 16:02:21.827583 1527131 ssh_runner.go:195] Run: which crictl
	I1213 16:02:21.831175 1527131 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:02:21.854565 1527131 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 16:02:21.854637 1527131 ssh_runner.go:195] Run: containerd --version
	I1213 16:02:21.878720 1527131 ssh_runner.go:195] Run: containerd --version
	I1213 16:02:21.901809 1527131 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 16:02:21.904695 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:02:21.920670 1527131 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 16:02:21.924637 1527131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:02:21.937646 1527131 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 16:02:21.940537 1527131 kubeadm.go:884] updating cluster {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:02:21.940697 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:21.940787 1527131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:02:21.972241 1527131 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:02:21.972268 1527131 containerd.go:534] Images already preloaded, skipping extraction
	I1213 16:02:21.972335 1527131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:02:22.011228 1527131 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:02:22.011254 1527131 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:02:22.011263 1527131 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 16:02:22.011415 1527131 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-526531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 16:02:22.011503 1527131 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:02:22.037059 1527131 cni.go:84] Creating CNI manager for ""
	I1213 16:02:22.037085 1527131 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:02:22.037100 1527131 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 16:02:22.037123 1527131 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-526531 NodeName:newest-cni-526531 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:02:22.037245 1527131 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-526531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
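The config above is written to /var/tmp/minikube/kubeadm.yaml.new and handed to kubeadm init further down. A file of this shape can be sanity-checked before the init run; a minimal sketch, assuming a recent kubeadm release that ships the 'config validate' subcommand:

	# check the generated config for unknown fields and version skew (path as used later in this log)
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# print the upstream defaults for the same config API versions, for comparison
	kubeadm config print init-defaults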
	I1213 16:02:22.037324 1527131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 16:02:22.045616 1527131 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:02:22.045746 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:02:22.054164 1527131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 16:02:22.068023 1527131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 16:02:22.085623 1527131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 16:02:22.101118 1527131 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:02:22.105257 1527131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:02:22.115696 1527131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:02:22.236674 1527131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:02:22.253725 1527131 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531 for IP: 192.168.76.2
	I1213 16:02:22.253801 1527131 certs.go:195] generating shared ca certs ...
	I1213 16:02:22.253832 1527131 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.254016 1527131 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:02:22.254124 1527131 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:02:22.254153 1527131 certs.go:257] generating profile certs ...
	I1213 16:02:22.254236 1527131 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key
	I1213 16:02:22.254267 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt with IP's: []
	I1213 16:02:22.746862 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt ...
	I1213 16:02:22.746902 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt: {Name:mk7b618219326f9fba540570e126db6afef7db97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.747100 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key ...
	I1213 16:02:22.747113 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key: {Name:mkadefb7fb5fbcd2154d988162829a52daab8655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.747208 1527131 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7
	I1213 16:02:22.747225 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 16:02:22.809461 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 ...
	I1213 16:02:22.809493 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7: {Name:mkce6931933926d60edd03298cb3538c188eea65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.809651 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7 ...
	I1213 16:02:22.809660 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7: {Name:mk5267764b911bf176ac97c9b4dd7d199f6b5ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.809731 1527131 certs.go:382] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt
	I1213 16:02:22.809817 1527131 certs.go:386] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key
	I1213 16:02:22.809875 1527131 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key
	I1213 16:02:22.809898 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt with IP's: []
	I1213 16:02:23.001038 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt ...
	I1213 16:02:23.001077 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt: {Name:mk387ba28125d038f533411623a4bd220070ddcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:23.002037 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key ...
	I1213 16:02:23.002079 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key: {Name:mk1a039510f32e55e5dd18d9c94a59fef628608a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:23.002321 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:02:23.002370 1527131 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:02:23.002380 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:02:23.002408 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:02:23.002444 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:02:23.002470 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:02:23.002520 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:02:23.003157 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:02:23.024481 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:02:23.042947 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:02:23.062246 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:02:23.080909 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 16:02:23.101609 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 16:02:23.121532 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:02:23.141397 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1213 16:02:23.162222 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:02:23.180800 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:02:23.199086 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:02:23.216531 1527131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:02:23.229620 1527131 ssh_runner.go:195] Run: openssl version
	I1213 16:02:23.236222 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.244051 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:02:23.251982 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.255821 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.255903 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.297335 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:02:23.305087 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12529342.pem /etc/ssl/certs/3ec20f2e.0
	I1213 16:02:23.312878 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.320527 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:02:23.328098 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.331918 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.331997 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.373256 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:02:23.381999 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 16:02:23.389673 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.397973 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:02:23.406099 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.410027 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.410090 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.453652 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:02:23.461102 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1252934.pem /etc/ssl/certs/51391683.0
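The openssl/ln sequence above is how the host CA bundle directory gets wired up: OpenSSL looks certificates up by subject hash, so each PEM placed under /usr/share/ca-certificates gets a symlink named <hash>.0 under /etc/ssl/certs. A minimal sketch of the same steps, using a hypothetical certificate path:

	# compute the subject hash OpenSSL uses for directory lookups
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
	# expose the certificate under the hash-named symlink OpenSSL expects
	sudo ln -fs /usr/share/ca-certificates/example-ca.pem "/etc/ssl/certs/${hash}.0"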
	I1213 16:02:23.469641 1527131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:02:23.473464 1527131 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 16:02:23.473520 1527131 kubeadm.go:401] StartCluster: {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:02:23.473612 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:02:23.473675 1527131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:02:23.501906 1527131 cri.go:89] found id: ""
	I1213 16:02:23.501976 1527131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:02:23.509856 1527131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 16:02:23.517759 1527131 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 16:02:23.517824 1527131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 16:02:23.525757 1527131 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 16:02:23.525778 1527131 kubeadm.go:158] found existing configuration files:
	
	I1213 16:02:23.525864 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 16:02:23.533675 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 16:02:23.533781 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 16:02:23.541421 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 16:02:23.549139 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 16:02:23.549209 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 16:02:23.556514 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 16:02:23.563859 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 16:02:23.563926 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 16:02:23.571345 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 16:02:23.578972 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 16:02:23.579034 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 16:02:23.588349 1527131 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 16:02:23.644568 1527131 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:02:23.644844 1527131 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:02:23.719501 1527131 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:02:23.719596 1527131 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:02:23.719638 1527131 kubeadm.go:319] OS: Linux
	I1213 16:02:23.719695 1527131 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:02:23.719756 1527131 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:02:23.719822 1527131 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:02:23.719885 1527131 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:02:23.719948 1527131 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:02:23.720014 1527131 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:02:23.720065 1527131 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:02:23.720126 1527131 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:02:23.720184 1527131 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:02:23.799280 1527131 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:02:23.799447 1527131 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:02:23.799586 1527131 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:02:23.813871 1527131 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:02:23.820586 1527131 out.go:252]   - Generating certificates and keys ...
	I1213 16:02:23.820722 1527131 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:02:23.820831 1527131 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:02:24.062915 1527131 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 16:02:24.119432 1527131 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 16:02:24.837877 1527131 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 16:02:25.323783 1527131 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 16:02:25.382177 1527131 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 16:02:25.382477 1527131 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:02:25.533405 1527131 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 16:02:25.533842 1527131 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:02:25.796805 1527131 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 16:02:25.975896 1527131 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 16:02:26.105650 1527131 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 16:02:26.105962 1527131 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:02:26.444172 1527131 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:02:26.939066 1527131 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:02:27.121431 1527131 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:02:27.579446 1527131 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:02:27.628725 1527131 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:02:27.629390 1527131 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:02:27.631991 1527131 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:02:27.635735 1527131 out.go:252]   - Booting up control plane ...
	I1213 16:02:27.635847 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:02:27.635926 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:02:27.635993 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:02:27.657055 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:02:27.657166 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:02:27.664926 1527131 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:02:27.665403 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:02:27.665639 1527131 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:02:27.803169 1527131 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:02:27.803302 1527131 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 16:02:41.545926 1500765 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 16:02:41.546108 1500765 kubeadm.go:319] 
	I1213 16:02:41.546236 1500765 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 16:02:41.551134 1500765 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:02:41.551190 1500765 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:02:41.551289 1500765 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:02:41.551373 1500765 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:02:41.551414 1500765 kubeadm.go:319] OS: Linux
	I1213 16:02:41.551459 1500765 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:02:41.551511 1500765 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:02:41.551561 1500765 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:02:41.551612 1500765 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:02:41.551663 1500765 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:02:41.551715 1500765 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:02:41.551764 1500765 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:02:41.551816 1500765 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:02:41.551866 1500765 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:02:41.551941 1500765 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:02:41.552042 1500765 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:02:41.552133 1500765 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:02:41.552199 1500765 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:02:41.555522 1500765 out.go:252]   - Generating certificates and keys ...
	I1213 16:02:41.555641 1500765 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:02:41.555717 1500765 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:02:41.555797 1500765 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 16:02:41.555873 1500765 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 16:02:41.555970 1500765 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 16:02:41.556031 1500765 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 16:02:41.556110 1500765 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 16:02:41.556213 1500765 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 16:02:41.556310 1500765 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 16:02:41.556431 1500765 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 16:02:41.556486 1500765 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 16:02:41.556559 1500765 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:02:41.556617 1500765 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:02:41.556678 1500765 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:02:41.556736 1500765 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:02:41.556817 1500765 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:02:41.556888 1500765 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:02:41.556980 1500765 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:02:41.557075 1500765 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:02:41.560042 1500765 out.go:252]   - Booting up control plane ...
	I1213 16:02:41.560143 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:02:41.560258 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:02:41.560348 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:02:41.560479 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:02:41.560588 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:02:41.560701 1500765 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:02:41.560824 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:02:41.560880 1500765 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:02:41.561017 1500765 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:02:41.561131 1500765 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 16:02:41.561233 1500765 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000293839s
	I1213 16:02:41.561265 1500765 kubeadm.go:319] 
	I1213 16:02:41.561329 1500765 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 16:02:41.561367 1500765 kubeadm.go:319] 	- The kubelet is not running
	I1213 16:02:41.561492 1500765 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 16:02:41.561506 1500765 kubeadm.go:319] 
	I1213 16:02:41.561630 1500765 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 16:02:41.561673 1500765 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 16:02:41.561708 1500765 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 16:02:41.561776 1500765 kubeadm.go:319] 
	I1213 16:02:41.561777 1500765 kubeadm.go:403] duration metric: took 8m8.131517099s to StartCluster
	I1213 16:02:41.561824 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:02:41.561903 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:02:41.594564 1500765 cri.go:89] found id: ""
	I1213 16:02:41.594594 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.594603 1500765 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:02:41.594609 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:02:41.594677 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:02:41.629231 1500765 cri.go:89] found id: ""
	I1213 16:02:41.629252 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.629260 1500765 logs.go:284] No container was found matching "etcd"
	I1213 16:02:41.629266 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:02:41.629322 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:02:41.656157 1500765 cri.go:89] found id: ""
	I1213 16:02:41.656181 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.656190 1500765 logs.go:284] No container was found matching "coredns"
	I1213 16:02:41.656196 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:02:41.656276 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:02:41.681173 1500765 cri.go:89] found id: ""
	I1213 16:02:41.681208 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.681217 1500765 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:02:41.681224 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:02:41.681308 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:02:41.708543 1500765 cri.go:89] found id: ""
	I1213 16:02:41.708568 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.708577 1500765 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:02:41.708583 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:02:41.708660 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:02:41.737039 1500765 cri.go:89] found id: ""
	I1213 16:02:41.737062 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.737071 1500765 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:02:41.737079 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:02:41.737137 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:02:41.762249 1500765 cri.go:89] found id: ""
	I1213 16:02:41.762275 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.762283 1500765 logs.go:284] No container was found matching "kindnet"
	I1213 16:02:41.762294 1500765 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:02:41.762306 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:02:41.828774 1500765 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:02:41.820905    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.821458    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823177    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823750    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.825347    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:02:41.820905    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.821458    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823177    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823750    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.825347    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:02:41.828797 1500765 logs.go:123] Gathering logs for containerd ...
	I1213 16:02:41.828810 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:02:41.870479 1500765 logs.go:123] Gathering logs for container status ...
	I1213 16:02:41.870512 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:02:41.897347 1500765 logs.go:123] Gathering logs for kubelet ...
	I1213 16:02:41.897374 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:02:41.954515 1500765 logs.go:123] Gathering logs for dmesg ...
	I1213 16:02:41.954549 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 16:02:41.971648 1500765 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 16:02:41.971703 1500765 out.go:285] * 
	W1213 16:02:41.971970 1500765 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:02:41.971990 1500765 out.go:285] * 
	W1213 16:02:41.974206 1500765 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 16:02:41.979727 1500765 out.go:203] 
	W1213 16:02:41.982586 1500765 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:02:41.982624 1500765 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 16:02:41.982645 1500765 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 16:02:41.985873 1500765 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 15:54:23 no-preload-439544 containerd[760]: time="2025-12-13T15:54:23.378148111Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.600685306Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.603732906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.611915029Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.613116551Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.503138610Z" level=info msg="No images store for sha256:84ea4651cf4d4486006d1346129c6964687be99508987d0ca606406fbc15a298"
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.506879683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\""
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.528281020Z" level=info msg="ImageCreate event name:\"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.529509930Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.056611379Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.059970700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.072962113Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.074433027Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.221784082Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.224970821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.232633350Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.233266000Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.393544387Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.395762984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.407681609Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.408407697Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.791409724Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.793787530Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.800749932Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.801079615Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:02:46.244574    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:46.245067    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:46.246783    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:46.247184    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:46.248693    5836 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 16:02:46 up  7:45,  0 user,  load average: 1.32, 1.71, 1.84
	Linux no-preload-439544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 16:02:43 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:02:43 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 16:02:43 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:43 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:43 no-preload-439544 kubelet[5606]: E1213 16:02:43.878177    5606 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:02:43 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:02:43 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:02:44 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 16:02:44 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:44 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:44 no-preload-439544 kubelet[5701]: E1213 16:02:44.646715    5701 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:02:44 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:02:44 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:02:45 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 13 16:02:45 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:45 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:45 no-preload-439544 kubelet[5731]: E1213 16:02:45.402542    5731 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:02:45 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:02:45 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:02:46 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 13 16:02:46 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:46 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:02:46 no-preload-439544 kubelet[5808]: E1213 16:02:46.143079    5808 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:02:46 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:02:46 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544: exit status 6 (346.881438ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 16:02:46.693530 1530168 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-439544" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-439544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.16s)
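The kubelet journal above shows why this start never converges: on this cgroup v1 host, kubelet v1.35.0-beta.0 fails its own configuration validation ("kubelet is configured to not run on a host using cgroup v1") and systemd restarts it in a loop, so the API server never comes up and every later check ends in "connection refused". A minimal troubleshooting sketch, assuming the out/minikube-linux-arm64 binary used by this job and nothing beyond the hints printed in the output above:

	# Check whether the host is on cgroup v2 ("cgroup2fs") or still on v1 ("tmpfs")
	stat -fc %T /sys/fs/cgroup/
	# Retry the profile with the cgroup-driver hint from the minikube suggestion above
	out/minikube-linux-arm64 start -p no-preload-439544 --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If kubelet still restart-loops, read the failure directly from the node
	out/minikube-linux-arm64 -p no-preload-439544 ssh -- sudo journalctl -xeu kubelet | tail -n 50

Note that the kubeadm warning above names 'FailCgroupV1' as a kubelet configuration option rather than a command-line flag, so whether it can be passed through minikube's --extra-config is not something this report verifies.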

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (114.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-439544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1213 16:03:06.740760 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:03:23.671468 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:03:42.554815 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-439544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m52.462607816s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-439544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-439544 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-439544 describe deploy/metrics-server -n kube-system: exit status 1 (55.596043ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-439544" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-439544 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
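Both failures in this block are downstream of the same problem: the addon manifests cannot be validated because nothing answers on localhost:8443, and kubectl has no "no-preload-439544" context to describe the deployment with. The status output above points at `minikube update-context` for the stale kubeconfig warning. A hedged follow-up sketch, reusing only commands that already appear in this report plus the standard `kubectl config get-contexts`:

	# The API server state as minikube sees it (reported as "Stopped" earlier in this log)
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-439544
	# Refresh the kubeconfig entry the status warning refers to
	out/minikube-linux-arm64 update-context -p no-preload-439544
	kubectl config get-contexts
	# Retry the addon only once the API server reports Running
	out/minikube-linux-arm64 addons enable metrics-server -p no-preload-439544 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain

Whether update-context can recreate a missing context, as opposed to refreshing a stale one, is not shown by this run.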
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-439544
helpers_test.go:244: (dbg) docker inspect no-preload-439544:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	        "Created": "2025-12-13T15:54:12.178460013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1501116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T15:54:12.242684028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hostname",
	        "HostsPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hosts",
	        "LogPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d-json.log",
	        "Name": "/no-preload-439544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-439544:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-439544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	                "LowerDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-439544",
	                "Source": "/var/lib/docker/volumes/no-preload-439544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-439544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-439544",
	                "name.minikube.sigs.k8s.io": "no-preload-439544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da8c56f1648b4b29d365160a5c9c8f4b83511f3b06bb300dab72442b5fe339b6",
	            "SandboxKey": "/var/run/docker/netns/da8c56f1648b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34193"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34194"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34197"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34195"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34196"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-439544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:8c:8a:2b:c2:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77a202cfc0163f00e436e53caf7341626192055eeb5da6a6f5d953ced7f7adfb",
	                    "EndpointID": "2d33c5fac6c3fc25d8e7af1d5a5218284f13ab87b543c41deb4d4804231c62b5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-439544",
	                        "53d57adb0653"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
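The inspect output above records the container's published ports, with the API server port 8443/tcp bound only on 127.0.0.1. As a small hedged example (standard docker CLI usage, not part of the test flow), the mapped host port can be read back out of that data:

	# Equivalent ways to recover the host port mapped to 8443/tcp (34196 in the data above)
	docker port no-preload-439544 8443
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-439544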
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544: exit status 6 (354.046278ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 16:04:39.586697 1532112 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-439544" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-439544 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─
────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─
────────────────────┤
	│ delete  │ -p old-k8s-version-912710                                                                                                                                                                                                                                  │ old-k8s-version-912710       │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:56 UTC │
	│ delete  │ -p old-k8s-version-912710                                                                                                                                                                                                                                  │ old-k8s-version-912710       │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:56 UTC │
	│ start   │ -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:56 UTC │ 13 Dec 25 15:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-270324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:57 UTC │ 13 Dec 25 15:57 UTC │
	│ stop    │ -p embed-certs-270324 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:57 UTC │ 13 Dec 25 15:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-270324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ start   │ -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ image   │ embed-certs-270324 image list --format=json                                                                                                                                                                                                                │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ pause   │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ unpause │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p disable-driver-mounts-614298                                                                                                                                                                                                                            │ disable-driver-mounts-614298 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 16:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-946932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:00 UTC │
	│ stop    │ -p default-k8s-diff-port-946932 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-946932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ image   │ default-k8s-diff-port-946932 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ pause   │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ unpause │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ start   │ -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-439544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 16:02:10
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 16:02:10.653265 1527131 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:02:10.653450 1527131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:02:10.653463 1527131 out.go:374] Setting ErrFile to fd 2...
	I1213 16:02:10.653469 1527131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:02:10.653723 1527131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:02:10.654178 1527131 out.go:368] Setting JSON to false
	I1213 16:02:10.655121 1527131 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27880,"bootTime":1765613851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:02:10.655187 1527131 start.go:143] virtualization:  
	I1213 16:02:10.659173 1527131 out.go:179] * [newest-cni-526531] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:02:10.663186 1527131 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:02:10.663301 1527131 notify.go:221] Checking for updates...
	I1213 16:02:10.669662 1527131 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:02:10.672735 1527131 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:02:10.675695 1527131 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:02:10.678798 1527131 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:02:10.681784 1527131 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:02:10.685234 1527131 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:02:10.685327 1527131 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:02:10.712873 1527131 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:02:10.712998 1527131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:02:10.776591 1527131 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:02:10.767542878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:02:10.776698 1527131 docker.go:319] overlay module found
	I1213 16:02:10.779851 1527131 out.go:179] * Using the docker driver based on user configuration
	I1213 16:02:10.782749 1527131 start.go:309] selected driver: docker
	I1213 16:02:10.782766 1527131 start.go:927] validating driver "docker" against <nil>
	I1213 16:02:10.782781 1527131 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:02:10.783532 1527131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:02:10.836394 1527131 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:02:10.826578222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:02:10.836552 1527131 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 16:02:10.836580 1527131 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 16:02:10.836798 1527131 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 16:02:10.839799 1527131 out.go:179] * Using Docker driver with root privileges
	I1213 16:02:10.842710 1527131 cni.go:84] Creating CNI manager for ""
	I1213 16:02:10.842780 1527131 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:02:10.842796 1527131 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 16:02:10.842882 1527131 start.go:353] cluster config:
	{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:02:10.846082 1527131 out.go:179] * Starting "newest-cni-526531" primary control-plane node in "newest-cni-526531" cluster
	I1213 16:02:10.848967 1527131 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:02:10.851950 1527131 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:02:10.854779 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:10.854844 1527131 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 16:02:10.854855 1527131 cache.go:65] Caching tarball of preloaded images
	I1213 16:02:10.854853 1527131 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:02:10.854953 1527131 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 16:02:10.854964 1527131 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 16:02:10.855092 1527131 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:02:10.855111 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json: {Name:mk86a24d01142c8f16a845d4170f48ade207872d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:10.882520 1527131 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:02:10.882541 1527131 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:02:10.882562 1527131 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:02:10.882591 1527131 start.go:360] acquireMachinesLock for newest-cni-526531: {Name:mk8328ad899404812480e264edd6e13cbbd26230 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:02:10.883398 1527131 start.go:364] duration metric: took 789.437µs to acquireMachinesLock for "newest-cni-526531"
	I1213 16:02:10.883434 1527131 start.go:93] Provisioning new machine with config: &{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:02:10.883509 1527131 start.go:125] createHost starting for "" (driver="docker")
	I1213 16:02:10.886860 1527131 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 16:02:10.887084 1527131 start.go:159] libmachine.API.Create for "newest-cni-526531" (driver="docker")
	I1213 16:02:10.887118 1527131 client.go:173] LocalClient.Create starting
	I1213 16:02:10.887190 1527131 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem
	I1213 16:02:10.887231 1527131 main.go:143] libmachine: Decoding PEM data...
	I1213 16:02:10.887246 1527131 main.go:143] libmachine: Parsing certificate...
	I1213 16:02:10.887296 1527131 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem
	I1213 16:02:10.887414 1527131 main.go:143] libmachine: Decoding PEM data...
	I1213 16:02:10.887431 1527131 main.go:143] libmachine: Parsing certificate...
	I1213 16:02:10.887816 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 16:02:10.908607 1527131 cli_runner.go:211] docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 16:02:10.908685 1527131 network_create.go:284] running [docker network inspect newest-cni-526531] to gather additional debugging logs...
	I1213 16:02:10.908709 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531
	W1213 16:02:10.924665 1527131 cli_runner.go:211] docker network inspect newest-cni-526531 returned with exit code 1
	I1213 16:02:10.924698 1527131 network_create.go:287] error running [docker network inspect newest-cni-526531]: docker network inspect newest-cni-526531: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-526531 not found
	I1213 16:02:10.924713 1527131 network_create.go:289] output of [docker network inspect newest-cni-526531]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-526531 not found
	
	** /stderr **
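The exit status 1 from `docker network inspect newest-cni-526531` is the expected first-run case: stdout and stderr are captured only to confirm the failure really is "network ... not found" before the network is created. When shelling out the same way, that distinction can be made on the error text; this is a sketch, not minikube's helper.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // networkExists reports whether a docker network with the given name exists,
    // treating a "not found" error from `docker network inspect` as absence
    // rather than as a failure.
    func networkExists(name string) (bool, error) {
    	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
    	if err == nil {
    		return true, nil
    	}
    	if strings.Contains(string(out), "not found") {
    		return false, nil
    	}
    	return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, out)
    }

    func main() {
    	exists, err := networkExists("newest-cni-526531")
    	fmt.Println(exists, err)
    }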
	I1213 16:02:10.924834 1527131 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:02:10.945123 1527131 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2ccf9a9eb2c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:9e:64:ed:f4:f2} reservation:<nil>}
	I1213 16:02:10.945400 1527131 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2add06a95dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:3f:19:f0:2f:b1} reservation:<nil>}
	I1213 16:02:10.945650 1527131 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8517ffe6861d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:d6:d4:51:8b:3d} reservation:<nil>}
	I1213 16:02:10.946092 1527131 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a39030}
	I1213 16:02:10.946118 1527131 network_create.go:124] attempt to create docker network newest-cni-526531 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 16:02:10.946180 1527131 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-526531 newest-cni-526531
	I1213 16:02:11.005690 1527131 network_create.go:108] docker network newest-cni-526531 192.168.76.0/24 created
	I1213 16:02:11.005737 1527131 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-526531" container
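The lines at 16:02:10.945-.946 show the subnet scan: candidate private /24s are tried in order, the three already backing other minikube networks (192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24) are skipped, and the first free range, 192.168.76.0/24, is taken, with .1 as the gateway and .2 as the node's static IP. A rough sketch of that walk, assuming the set of taken CIDRs is already known; the step of 9 mirrors the log, and the real picker also inspects host interfaces and reservations.

    package main

    import "fmt"

    // firstFreeSubnet walks private /24 candidates starting at 192.168.49.0/24
    // in steps of 9 (49, 58, 67, 76, ...) and returns the first one not taken.
    func firstFreeSubnet(taken map[string]bool) string {
    	for third := 49; third <= 247; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if !taken[cidr] {
    			return cidr
    		}
    	}
    	return ""
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    	}
    	fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24; the node gets the .2
    }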
	I1213 16:02:11.005844 1527131 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 16:02:11.023684 1527131 cli_runner.go:164] Run: docker volume create newest-cni-526531 --label name.minikube.sigs.k8s.io=newest-cni-526531 --label created_by.minikube.sigs.k8s.io=true
	I1213 16:02:11.043087 1527131 oci.go:103] Successfully created a docker volume newest-cni-526531
	I1213 16:02:11.043189 1527131 cli_runner.go:164] Run: docker run --rm --name newest-cni-526531-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-526531 --entrypoint /usr/bin/test -v newest-cni-526531:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 16:02:11.614357 1527131 oci.go:107] Successfully prepared a docker volume newest-cni-526531
	I1213 16:02:11.614420 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:11.614431 1527131 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 16:02:11.614506 1527131 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-526531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 16:02:15.477407 1527131 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-526531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.862862091s)
	I1213 16:02:15.477459 1527131 kic.go:203] duration metric: took 3.863024311s to extract preloaded images to volume ...
	W1213 16:02:15.477597 1527131 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 16:02:15.477708 1527131 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 16:02:15.532223 1527131 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-526531 --name newest-cni-526531 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-526531 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-526531 --network newest-cni-526531 --ip 192.168.76.2 --volume newest-cni-526531:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 16:02:15.845102 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Running}}
	I1213 16:02:15.866861 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:15.892916 1527131 cli_runner.go:164] Run: docker exec newest-cni-526531 stat /var/lib/dpkg/alternatives/iptables
	I1213 16:02:15.948563 1527131 oci.go:144] the created container "newest-cni-526531" has a running status.
	I1213 16:02:15.948590 1527131 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa...
	I1213 16:02:16.266786 1527131 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 16:02:16.296564 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:16.329593 1527131 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 16:02:16.329619 1527131 kic_runner.go:114] Args: [docker exec --privileged newest-cni-526531 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 16:02:16.396781 1527131 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:02:16.416507 1527131 machine.go:94] provisionDockerMachine start ...
	I1213 16:02:16.416610 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:16.437096 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:16.437445 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:16.437455 1527131 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:02:16.438031 1527131 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45440->127.0.0.1:34223: read: connection reset by peer
	I1213 16:02:19.590785 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
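The first dial at 16:02:16 fails with "connection reset by peer" because sshd inside the freshly started container is not accepting connections yet; the provisioner simply retries until the `hostname` command succeeds at 16:02:19. A minimal version of that loop with golang.org/x/crypto/ssh is sketched below; the port and key path are the ones from this run and are purely illustrative.

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/path/to/machines/newest-cni-526531/id_rsa") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local container
    		Timeout:         5 * time.Second,
    	}

    	// sshd in the new container may reset the first few connections,
    	// so keep dialing until it answers or we give up.
    	var client *ssh.Client
    	for i := 0; i < 30; i++ {
    		client, err = ssh.Dial("tcp", "127.0.0.1:34223", cfg)
    		if err == nil {
    			break
    		}
    		time.Sleep(time.Second)
    	}
    	if client == nil {
    		panic(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("hostname")
    	fmt.Println(string(out), err)
    }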
	
	I1213 16:02:19.590808 1527131 ubuntu.go:182] provisioning hostname "newest-cni-526531"
	I1213 16:02:19.590880 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:19.609205 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:19.609519 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:19.609531 1527131 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-526531 && echo "newest-cni-526531" | sudo tee /etc/hostname
	I1213 16:02:19.768653 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:02:19.768776 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:19.785859 1527131 main.go:143] libmachine: Using SSH client type: native
	I1213 16:02:19.786173 1527131 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34223 <nil> <nil>}
	I1213 16:02:19.786190 1527131 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-526531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-526531/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-526531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:02:19.943619 1527131 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:02:19.943646 1527131 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:02:19.943683 1527131 ubuntu.go:190] setting up certificates
	I1213 16:02:19.943694 1527131 provision.go:84] configureAuth start
	I1213 16:02:19.943767 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:19.960971 1527131 provision.go:143] copyHostCerts
	I1213 16:02:19.961044 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:02:19.961058 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:02:19.961139 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:02:19.961239 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:02:19.961249 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:02:19.961277 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:02:19.961346 1527131 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:02:19.961355 1527131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:02:19.961380 1527131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:02:19.961441 1527131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.newest-cni-526531 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-526531]
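The server cert generated here is an ordinary CA-signed certificate whose SAN list mixes the IPs and hostnames from the log (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-526531), so the machine's endpoints verify under any of those names. A self-contained crypto/x509 sketch of the same shape follows; it is self-signed for brevity, whereas the real one is signed with ca.pem/ca-key.pem as the log says.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-526531"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN entries from the log: IP and DNS names on one certificate.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-526531"},
    	}
    	// Self-signed for the sketch; a CA-signed cert would pass the CA
    	// certificate and key as the parent and signer instead.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }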
	I1213 16:02:20.054612 1527131 provision.go:177] copyRemoteCerts
	I1213 16:02:20.054686 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:02:20.054736 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.072851 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.179668 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:02:20.198845 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 16:02:20.217676 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 16:02:20.236010 1527131 provision.go:87] duration metric: took 292.302594ms to configureAuth
	I1213 16:02:20.236050 1527131 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:02:20.236287 1527131 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:02:20.236298 1527131 machine.go:97] duration metric: took 3.819772251s to provisionDockerMachine
	I1213 16:02:20.236311 1527131 client.go:176] duration metric: took 9.349180869s to LocalClient.Create
	I1213 16:02:20.236333 1527131 start.go:167] duration metric: took 9.349249118s to libmachine.API.Create "newest-cni-526531"
	I1213 16:02:20.236344 1527131 start.go:293] postStartSetup for "newest-cni-526531" (driver="docker")
	I1213 16:02:20.236355 1527131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:02:20.236412 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:02:20.236459 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.253931 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.359511 1527131 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:02:20.363075 1527131 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:02:20.363102 1527131 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:02:20.363114 1527131 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:02:20.363170 1527131 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:02:20.363253 1527131 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:02:20.363383 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:02:20.370977 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:02:20.388811 1527131 start.go:296] duration metric: took 152.451817ms for postStartSetup
	I1213 16:02:20.389184 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:20.406647 1527131 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:02:20.406930 1527131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:02:20.406975 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.424459 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.529476 1527131 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:02:20.539030 1527131 start.go:128] duration metric: took 9.655490819s to createHost
	I1213 16:02:20.539056 1527131 start.go:83] releasing machines lock for "newest-cni-526531", held for 9.655642684s
	I1213 16:02:20.539196 1527131 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:02:20.566091 1527131 ssh_runner.go:195] Run: cat /version.json
	I1213 16:02:20.566128 1527131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:02:20.566142 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.566184 1527131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:02:20.588830 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.608973 1527131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34223 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:02:20.799431 1527131 ssh_runner.go:195] Run: systemctl --version
	I1213 16:02:20.806227 1527131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:02:20.810716 1527131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:02:20.810789 1527131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:02:20.839037 1527131 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
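The find/mv pass above is what produced the "disabled [...] bridge cni config(s)" line: any pre-existing bridge or podman CNI config under /etc/cni/net.d is renamed with a .mk_disabled suffix so it cannot shadow the CNI minikube installs later (kindnet, per the earlier recommendation for the docker driver with containerd), while the loopback config is deliberately left alone.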
	I1213 16:02:20.839104 1527131 start.go:496] detecting cgroup driver to use...
	I1213 16:02:20.839151 1527131 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:02:20.839236 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:02:20.854464 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:02:20.867574 1527131 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:02:20.867669 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:02:20.885257 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:02:20.903596 1527131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:02:21.022899 1527131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:02:21.152487 1527131 docker.go:234] disabling docker service ...
	I1213 16:02:21.152550 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:02:21.174727 1527131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:02:21.188382 1527131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:02:21.299657 1527131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:02:21.434130 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:02:21.446805 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:02:21.461400 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:02:21.470517 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:02:21.479694 1527131 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:02:21.479759 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:02:21.494124 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:02:21.502957 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:02:21.512551 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:02:21.521611 1527131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:02:21.530083 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:02:21.539325 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:02:21.548742 1527131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:02:21.557617 1527131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:02:21.565268 1527131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:02:21.572714 1527131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:02:21.683769 1527131 ssh_runner.go:195] Run: sudo systemctl restart containerd
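The block of sed runs before the daemon-reload and restart pins the CRI settings in /etc/containerd/config.toml: sandbox_image becomes registry.k8s.io/pause:3.10.1, restrict_oom_score_adj and SystemdCgroup become false (so containerd drives cgroupfs, matching the cgroup driver detected above and the kubelet config generated below), conf_dir points at /etc/cni/net.d, and enable_unprivileged_ports is re-added as true under [plugins."io.containerd.grpc.v1.cri"]. Each edit is a line-oriented regex rewrite; the SystemdCgroup one looks roughly like this in Go (illustrative only, not minikube's code).

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// A stand-in fragment of /etc/containerd/config.toml.
    	conf := "    [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
    		"      SystemdCgroup = true\n"

    	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }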
	I1213 16:02:21.823560 1527131 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:02:21.823710 1527131 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:02:21.827515 1527131 start.go:564] Will wait 60s for crictl version
	I1213 16:02:21.827583 1527131 ssh_runner.go:195] Run: which crictl
	I1213 16:02:21.831175 1527131 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:02:21.854565 1527131 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 16:02:21.854637 1527131 ssh_runner.go:195] Run: containerd --version
	I1213 16:02:21.878720 1527131 ssh_runner.go:195] Run: containerd --version
	I1213 16:02:21.901809 1527131 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 16:02:21.904695 1527131 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:02:21.920670 1527131 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 16:02:21.924637 1527131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:02:21.937646 1527131 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 16:02:21.940537 1527131 kubeadm.go:884] updating cluster {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:02:21.940697 1527131 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:02:21.940787 1527131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:02:21.972241 1527131 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:02:21.972268 1527131 containerd.go:534] Images already preloaded, skipping extraction
	I1213 16:02:21.972335 1527131 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:02:22.011228 1527131 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:02:22.011254 1527131 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:02:22.011263 1527131 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 16:02:22.011415 1527131 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-526531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
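The empty ExecStart= line in the generated kubelet drop-in is deliberate: a systemd override for a non-oneshot service must first clear the inherited ExecStart before assigning its own command line, otherwise systemd rejects the unit for having two ExecStart values. The drop-in is written out a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes).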
	I1213 16:02:22.011503 1527131 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:02:22.037059 1527131 cni.go:84] Creating CNI manager for ""
	I1213 16:02:22.037085 1527131 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:02:22.037100 1527131 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 16:02:22.037123 1527131 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-526531 NodeName:newest-cni-526531 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:02:22.037245 1527131 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-526531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 16:02:22.037324 1527131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 16:02:22.045616 1527131 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:02:22.045746 1527131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:02:22.054164 1527131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 16:02:22.068023 1527131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 16:02:22.085623 1527131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 16:02:22.101118 1527131 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:02:22.105257 1527131 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:02:22.115696 1527131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:02:22.236674 1527131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:02:22.253725 1527131 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531 for IP: 192.168.76.2
	I1213 16:02:22.253801 1527131 certs.go:195] generating shared ca certs ...
	I1213 16:02:22.253832 1527131 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.254016 1527131 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:02:22.254124 1527131 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:02:22.254153 1527131 certs.go:257] generating profile certs ...
	I1213 16:02:22.254236 1527131 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key
	I1213 16:02:22.254267 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt with IP's: []
	I1213 16:02:22.746862 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt ...
	I1213 16:02:22.746902 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.crt: {Name:mk7b618219326f9fba540570e126db6afef7db97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.747100 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key ...
	I1213 16:02:22.747113 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key: {Name:mkadefb7fb5fbcd2154d988162829a52daab8655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.747208 1527131 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7
	I1213 16:02:22.747225 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 16:02:22.809461 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 ...
	I1213 16:02:22.809493 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7: {Name:mkce6931933926d60edd03298cb3538c188eea65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.809651 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7 ...
	I1213 16:02:22.809660 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7: {Name:mk5267764b911bf176ac97c9b4dd7d199f6b5ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:22.809731 1527131 certs.go:382] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt.b007e6c7 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt
	I1213 16:02:22.809817 1527131 certs.go:386] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key
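The SAN list for the apiserver cert above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2) includes 10.96.0.1 because the `kubernetes` Service in the default namespace always receives the first address of the ServiceCIDR (10.96.0.0/12 in this cluster config), and in-cluster clients reach the apiserver through that IP; 127.0.0.1 and the node IP 192.168.76.2 cover local and network access. A quick sketch of computing that first service IP from a CIDR (no octet-carry handling, which is fine for ranges whose base ends in .0):

    package main

    import (
    	"fmt"
    	"net"
    )

    // firstServiceIP returns the first usable address of a service CIDR,
    // e.g. 10.96.0.1 for 10.96.0.0/12 -- the ClusterIP of the `kubernetes`
    // Service, so it must appear in the apiserver cert's SANs.
    func firstServiceIP(cidr string) (net.IP, error) {
    	_, ipnet, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return nil, err
    	}
    	ip := ipnet.IP.To4()
    	return net.IPv4(ip[0], ip[1], ip[2], ip[3]+1), nil
    }

    func main() {
    	ip, err := firstServiceIP("10.96.0.0/12")
    	fmt.Println(ip, err) // 10.96.0.1 <nil>
    }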
	I1213 16:02:22.809875 1527131 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key
	I1213 16:02:22.809898 1527131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt with IP's: []
	I1213 16:02:23.001038 1527131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt ...
	I1213 16:02:23.001077 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt: {Name:mk387ba28125d038f533411623a4bd220070ddcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:23.002037 1527131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key ...
	I1213 16:02:23.002079 1527131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key: {Name:mk1a039510f32e55e5dd18d9c94a59fef628608a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:02:23.002321 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:02:23.002370 1527131 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:02:23.002380 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:02:23.002408 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:02:23.002444 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:02:23.002470 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:02:23.002520 1527131 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:02:23.003157 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:02:23.024481 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:02:23.042947 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:02:23.062246 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:02:23.080909 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 16:02:23.101609 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 16:02:23.121532 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:02:23.141397 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1213 16:02:23.162222 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:02:23.180800 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:02:23.199086 1527131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:02:23.216531 1527131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:02:23.229620 1527131 ssh_runner.go:195] Run: openssl version
	I1213 16:02:23.236222 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.244051 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:02:23.251982 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.255821 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.255903 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:02:23.297335 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:02:23.305087 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12529342.pem /etc/ssl/certs/3ec20f2e.0
	I1213 16:02:23.312878 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.320527 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:02:23.328098 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.331918 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.331997 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:02:23.373256 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:02:23.381999 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 16:02:23.389673 1527131 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.397973 1527131 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:02:23.406099 1527131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.410027 1527131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.410090 1527131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:02:23.453652 1527131 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:02:23.461102 1527131 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1252934.pem /etc/ssl/certs/51391683.0
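Everything from 16:02:23.236 to here is the standard OpenSSL trust-directory dance, run once per CA file: the PEM is placed under /usr/share/ca-certificates, its subject hash is computed with `openssl x509 -hash -noout -in <pem>`, and /etc/ssl/certs/<hash>.0 is symlinked back to it, which is where the names 3ec20f2e.0, b5213941.0 and 51391683.0 come from. The same step in miniature (paths hypothetical):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCert symlinks certPath into trustDir under its OpenSSL subject
    // hash, the <hash>.0 naming scheme that c_rehash/update-ca-certificates use.
    func installCert(certPath, trustDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(trustDir, hash+".0")
    	_ = os.Remove(link) // replace any stale link, like `ln -fs`
    	return os.Symlink(certPath, link)
    }

    func main() {
    	// Hypothetical paths matching the layout in the log.
    	err := installCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    	fmt.Println(err)
    }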
	I1213 16:02:23.469641 1527131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:02:23.473464 1527131 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 16:02:23.473520 1527131 kubeadm.go:401] StartCluster: {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:02:23.473612 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:02:23.473675 1527131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:02:23.501906 1527131 cri.go:89] found id: ""
	I1213 16:02:23.501976 1527131 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:02:23.509856 1527131 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 16:02:23.517759 1527131 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 16:02:23.517824 1527131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 16:02:23.525757 1527131 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 16:02:23.525778 1527131 kubeadm.go:158] found existing configuration files:
	
	I1213 16:02:23.525864 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 16:02:23.533675 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 16:02:23.533781 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 16:02:23.541421 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 16:02:23.549139 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 16:02:23.549209 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 16:02:23.556514 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 16:02:23.563859 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 16:02:23.563926 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 16:02:23.571345 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 16:02:23.578972 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 16:02:23.579034 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 16:02:23.588349 1527131 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 16:02:23.644568 1527131 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:02:23.644844 1527131 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:02:23.719501 1527131 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:02:23.719596 1527131 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:02:23.719638 1527131 kubeadm.go:319] OS: Linux
	I1213 16:02:23.719695 1527131 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:02:23.719756 1527131 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:02:23.719822 1527131 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:02:23.719885 1527131 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:02:23.719948 1527131 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:02:23.720014 1527131 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:02:23.720065 1527131 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:02:23.720126 1527131 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:02:23.720184 1527131 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:02:23.799280 1527131 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:02:23.799447 1527131 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:02:23.799586 1527131 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:02:23.813871 1527131 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:02:23.820586 1527131 out.go:252]   - Generating certificates and keys ...
	I1213 16:02:23.820722 1527131 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:02:23.820831 1527131 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:02:24.062915 1527131 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 16:02:24.119432 1527131 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 16:02:24.837877 1527131 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 16:02:25.323783 1527131 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 16:02:25.382177 1527131 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 16:02:25.382477 1527131 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:02:25.533405 1527131 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 16:02:25.533842 1527131 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:02:25.796805 1527131 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 16:02:25.975896 1527131 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 16:02:26.105650 1527131 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 16:02:26.105962 1527131 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:02:26.444172 1527131 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:02:26.939066 1527131 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:02:27.121431 1527131 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:02:27.579446 1527131 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:02:27.628725 1527131 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:02:27.629390 1527131 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:02:27.631991 1527131 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:02:27.635735 1527131 out.go:252]   - Booting up control plane ...
	I1213 16:02:27.635847 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:02:27.635926 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:02:27.635993 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:02:27.657055 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:02:27.657166 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:02:27.664926 1527131 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:02:27.665403 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:02:27.665639 1527131 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:02:27.803169 1527131 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:02:27.803302 1527131 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 16:02:41.545926 1500765 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 16:02:41.546108 1500765 kubeadm.go:319] 
	I1213 16:02:41.546236 1500765 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 16:02:41.551134 1500765 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:02:41.551190 1500765 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:02:41.551289 1500765 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:02:41.551373 1500765 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:02:41.551414 1500765 kubeadm.go:319] OS: Linux
	I1213 16:02:41.551459 1500765 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:02:41.551511 1500765 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:02:41.551561 1500765 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:02:41.551612 1500765 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:02:41.551663 1500765 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:02:41.551715 1500765 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:02:41.551764 1500765 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:02:41.551816 1500765 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:02:41.551866 1500765 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:02:41.551941 1500765 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:02:41.552042 1500765 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:02:41.552133 1500765 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:02:41.552199 1500765 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:02:41.555522 1500765 out.go:252]   - Generating certificates and keys ...
	I1213 16:02:41.555641 1500765 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:02:41.555717 1500765 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:02:41.555797 1500765 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 16:02:41.555873 1500765 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 16:02:41.555970 1500765 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 16:02:41.556031 1500765 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 16:02:41.556110 1500765 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 16:02:41.556213 1500765 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 16:02:41.556310 1500765 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 16:02:41.556431 1500765 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 16:02:41.556486 1500765 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 16:02:41.556559 1500765 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:02:41.556617 1500765 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:02:41.556678 1500765 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:02:41.556736 1500765 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:02:41.556817 1500765 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:02:41.556888 1500765 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:02:41.556980 1500765 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:02:41.557075 1500765 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:02:41.560042 1500765 out.go:252]   - Booting up control plane ...
	I1213 16:02:41.560143 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:02:41.560258 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:02:41.560348 1500765 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:02:41.560479 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:02:41.560588 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:02:41.560701 1500765 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:02:41.560824 1500765 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:02:41.560880 1500765 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:02:41.561017 1500765 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:02:41.561131 1500765 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 16:02:41.561233 1500765 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000293839s
	I1213 16:02:41.561265 1500765 kubeadm.go:319] 
	I1213 16:02:41.561329 1500765 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 16:02:41.561367 1500765 kubeadm.go:319] 	- The kubelet is not running
	I1213 16:02:41.561492 1500765 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 16:02:41.561506 1500765 kubeadm.go:319] 
	I1213 16:02:41.561630 1500765 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 16:02:41.561673 1500765 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 16:02:41.561708 1500765 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 16:02:41.561776 1500765 kubeadm.go:319] 
	I1213 16:02:41.561777 1500765 kubeadm.go:403] duration metric: took 8m8.131517099s to StartCluster
	I1213 16:02:41.561824 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:02:41.561903 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:02:41.594564 1500765 cri.go:89] found id: ""
	I1213 16:02:41.594594 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.594603 1500765 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:02:41.594609 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:02:41.594677 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:02:41.629231 1500765 cri.go:89] found id: ""
	I1213 16:02:41.629252 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.629260 1500765 logs.go:284] No container was found matching "etcd"
	I1213 16:02:41.629266 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:02:41.629322 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:02:41.656157 1500765 cri.go:89] found id: ""
	I1213 16:02:41.656181 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.656190 1500765 logs.go:284] No container was found matching "coredns"
	I1213 16:02:41.656196 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:02:41.656276 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:02:41.681173 1500765 cri.go:89] found id: ""
	I1213 16:02:41.681208 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.681217 1500765 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:02:41.681224 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:02:41.681308 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:02:41.708543 1500765 cri.go:89] found id: ""
	I1213 16:02:41.708568 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.708577 1500765 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:02:41.708583 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:02:41.708660 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:02:41.737039 1500765 cri.go:89] found id: ""
	I1213 16:02:41.737062 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.737071 1500765 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:02:41.737079 1500765 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:02:41.737137 1500765 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:02:41.762249 1500765 cri.go:89] found id: ""
	I1213 16:02:41.762275 1500765 logs.go:282] 0 containers: []
	W1213 16:02:41.762283 1500765 logs.go:284] No container was found matching "kindnet"
	I1213 16:02:41.762294 1500765 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:02:41.762306 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:02:41.828774 1500765 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:02:41.820905    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.821458    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823177    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823750    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.825347    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:02:41.820905    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.821458    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823177    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.823750    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:02:41.825347    5442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:02:41.828797 1500765 logs.go:123] Gathering logs for containerd ...
	I1213 16:02:41.828810 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:02:41.870479 1500765 logs.go:123] Gathering logs for container status ...
	I1213 16:02:41.870512 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:02:41.897347 1500765 logs.go:123] Gathering logs for kubelet ...
	I1213 16:02:41.897374 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:02:41.954515 1500765 logs.go:123] Gathering logs for dmesg ...
	I1213 16:02:41.954549 1500765 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 16:02:41.971648 1500765 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 16:02:41.971703 1500765 out.go:285] * 
	W1213 16:02:41.971970 1500765 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:02:41.971990 1500765 out.go:285] * 
	W1213 16:02:41.974206 1500765 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 16:02:41.979727 1500765 out.go:203] 
	W1213 16:02:41.982586 1500765 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000293839s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:02:41.982624 1500765 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 16:02:41.982645 1500765 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 16:02:41.985873 1500765 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 15:54:23 no-preload-439544 containerd[760]: time="2025-12-13T15:54:23.378148111Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.600685306Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.603732906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.611915029Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:24 no-preload-439544 containerd[760]: time="2025-12-13T15:54:24.613116551Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.503138610Z" level=info msg="No images store for sha256:84ea4651cf4d4486006d1346129c6964687be99508987d0ca606406fbc15a298"
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.506879683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\""
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.528281020Z" level=info msg="ImageCreate event name:\"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:25 no-preload-439544 containerd[760]: time="2025-12-13T15:54:25.529509930Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.056611379Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.059970700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.072962113Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:27 no-preload-439544 containerd[760]: time="2025-12-13T15:54:27.074433027Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.221784082Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.224970821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.232633350Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:28 no-preload-439544 containerd[760]: time="2025-12-13T15:54:28.233266000Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.393544387Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.395762984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.407681609Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.408407697Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.791409724Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.793787530Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.800749932Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 15:54:29 no-preload-439544 containerd[760]: time="2025-12-13T15:54:29.801079615Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:04:40.301682    6937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:04:40.302301    6937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:04:40.303819    6937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:04:40.304387    6937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:04:40.306101    6937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 16:04:40 up  7:47,  0 user,  load average: 0.27, 1.21, 1.64
	Linux no-preload-439544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 16:04:37 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:04:37 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 474.
	Dec 13 16:04:37 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:04:37 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:04:37 no-preload-439544 kubelet[6820]: E1213 16:04:37.898023    6820 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:04:37 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:04:37 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:04:38 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 475.
	Dec 13 16:04:38 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:04:38 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:04:38 no-preload-439544 kubelet[6826]: E1213 16:04:38.660242    6826 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:04:38 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:04:38 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:04:39 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 476.
	Dec 13 16:04:39 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:04:39 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:04:39 no-preload-439544 kubelet[6836]: E1213 16:04:39.439239    6836 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:04:39 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:04:39 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:04:40 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 477.
	Dec 13 16:04:40 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:04:40 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:04:40 no-preload-439544 kubelet[6909]: E1213 16:04:40.151207    6909 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:04:40 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:04:40 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544: exit status 6 (322.117827ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 16:04:40.730467 1532336 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-439544" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-439544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (114.04s)
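The kubelet journal captured above shows why kubeadm's wait-control-plane phase times out: kubelet v1.35.0-beta.0 exits immediately on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1") and systemd keeps restarting it, so the apiserver never comes up. As a rough troubleshooting sketch only, not something this test run executed: the profile name is taken from the log, the journal command mirrors what minikube itself gathers over SSH, and the retry flag is the one printed in minikube's own suggestion above; whether that flag clears the cgroup v1 rejection (as opposed to setting the kubelet configuration option FailCgroupV1 to false, per the kubeadm [WARNING SystemVerification] above) is an assumption.

	# Inspect the kubelet restart loop on the node (same journal minikube gathered above).
	minikube ssh -p no-preload-439544 -- sudo journalctl -xeu kubelet | tail -n 40

	# Retry with the extra kubelet config suggested in the failure output; profile, driver,
	# runtime and Kubernetes version are copied from the failing invocation in this report.
	minikube start -p no-preload-439544 --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd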

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (369.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 16:04:53.532453 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:18.170818 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:21.235520 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:38.304571 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:38.311087 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:38.322727 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:38.344149 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:38.385599 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:38.467721 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:38.629200 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:38.950873 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:39.593031 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:40.874319 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:43.435996 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:48.557912 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:05:58.800164 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:06:19.281611 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:07:00.248909 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:08:21.253138 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:08:22.172490 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:08:23.671447 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:08:42.552412 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:09:53.531938 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:10:18.171111 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.024532696s)

-- stdout --
	* [no-preload-439544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-439544" primary control-plane node in "no-preload-439544" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1213 16:04:42.413194 1532633 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:04:42.413307 1532633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:04:42.413317 1532633 out.go:374] Setting ErrFile to fd 2...
	I1213 16:04:42.413323 1532633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:04:42.413567 1532633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:04:42.413904 1532633 out.go:368] Setting JSON to false
	I1213 16:04:42.414786 1532633 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28031,"bootTime":1765613851,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:04:42.414858 1532633 start.go:143] virtualization:  
	I1213 16:04:42.417845 1532633 out.go:179] * [no-preload-439544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:04:42.421555 1532633 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:04:42.421640 1532633 notify.go:221] Checking for updates...
	I1213 16:04:42.427687 1532633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:04:42.430499 1532633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:04:42.433392 1532633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:04:42.436121 1532633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:04:42.439040 1532633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:04:42.442494 1532633 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:04:42.443099 1532633 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:04:42.466960 1532633 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:04:42.467080 1532633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:04:42.529333 1532633 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:04:42.520259632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:04:42.529443 1532633 docker.go:319] overlay module found
	I1213 16:04:42.532652 1532633 out.go:179] * Using the docker driver based on existing profile
	I1213 16:04:42.535539 1532633 start.go:309] selected driver: docker
	I1213 16:04:42.535559 1532633 start.go:927] validating driver "docker" against &{Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:04:42.535665 1532633 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:04:42.536328 1532633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:04:42.590849 1532633 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:04:42.581095747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:04:42.591180 1532633 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 16:04:42.591218 1532633 cni.go:84] Creating CNI manager for ""
	I1213 16:04:42.591273 1532633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:04:42.591342 1532633 start.go:353] cluster config:
	{Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:04:42.596381 1532633 out.go:179] * Starting "no-preload-439544" primary control-plane node in "no-preload-439544" cluster
	I1213 16:04:42.599266 1532633 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:04:42.602152 1532633 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:04:42.604937 1532633 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:04:42.605025 1532633 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:04:42.605107 1532633 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/config.json ...
	I1213 16:04:42.605412 1532633 cache.go:107] acquiring lock: {Name:mk6458bc7297def26ffc87aa852ed603976a017c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605492 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 16:04:42.605501 1532633 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.253µs
	I1213 16:04:42.605513 1532633 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 16:04:42.605528 1532633 cache.go:107] acquiring lock: {Name:mk04216f72d0f7cd3d2308def830acac11c8b85d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605561 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 16:04:42.605566 1532633 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 43.305µs
	I1213 16:04:42.605573 1532633 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605582 1532633 cache.go:107] acquiring lock: {Name:mk2054b1540f1c54f9b25f5f78ec681c8220cfcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605608 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 16:04:42.605613 1532633 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 31.647µs
	I1213 16:04:42.605619 1532633 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605629 1532633 cache.go:107] acquiring lock: {Name:mke9c9289e43b08c6e721f866225f618ba3afddf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605654 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 16:04:42.605660 1532633 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 31.704µs
	I1213 16:04:42.605665 1532633 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605674 1532633 cache.go:107] acquiring lock: {Name:mkd9f47dfe476ebd2c352fdee514a99c9fba7295 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605698 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 16:04:42.605703 1532633 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.621µs
	I1213 16:04:42.605709 1532633 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605719 1532633 cache.go:107] acquiring lock: {Name:mkecf0483a10d405cf273c97b7180611bb889c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605749 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 16:04:42.605754 1532633 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 35.872µs
	I1213 16:04:42.605759 1532633 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 16:04:42.605768 1532633 cache.go:107] acquiring lock: {Name:mkb08190a177fa29b2e45167b12d4742acf808cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605793 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 16:04:42.605798 1532633 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 31.294µs
	I1213 16:04:42.605804 1532633 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 16:04:42.605812 1532633 cache.go:107] acquiring lock: {Name:mk18c875751b02ce01ad21e18c1d2a3a9ed5d930 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605845 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 16:04:42.605849 1532633 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.415µs
	I1213 16:04:42.605855 1532633 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 16:04:42.605861 1532633 cache.go:87] Successfully saved all images to host disk.
	I1213 16:04:42.624275 1532633 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:04:42.624299 1532633 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:04:42.624322 1532633 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:04:42.624352 1532633 start.go:360] acquireMachinesLock for no-preload-439544: {Name:mk6eb67fc85c056d1917e38b306c3e4e0ae30393 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.624426 1532633 start.go:364] duration metric: took 45.578µs to acquireMachinesLock for "no-preload-439544"
	I1213 16:04:42.624452 1532633 start.go:96] Skipping create...Using existing machine configuration
	I1213 16:04:42.624458 1532633 fix.go:54] fixHost starting: 
	I1213 16:04:42.624729 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:42.641391 1532633 fix.go:112] recreateIfNeeded on no-preload-439544: state=Stopped err=<nil>
	W1213 16:04:42.641430 1532633 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 16:04:42.644748 1532633 out.go:252] * Restarting existing docker container for "no-preload-439544" ...
	I1213 16:04:42.644834 1532633 cli_runner.go:164] Run: docker start no-preload-439544
	I1213 16:04:42.892931 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:42.919215 1532633 kic.go:430] container "no-preload-439544" state is running.
	I1213 16:04:42.919778 1532633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 16:04:42.944557 1532633 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/config.json ...
	I1213 16:04:42.944781 1532633 machine.go:94] provisionDockerMachine start ...
	I1213 16:04:42.944844 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:42.967340 1532633 main.go:143] libmachine: Using SSH client type: native
	I1213 16:04:42.967676 1532633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34228 <nil> <nil>}
	I1213 16:04:42.967688 1532633 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:04:42.968381 1532633 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46966->127.0.0.1:34228: read: connection reset by peer
	I1213 16:04:46.127864 1532633 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-439544
	
	I1213 16:04:46.127889 1532633 ubuntu.go:182] provisioning hostname "no-preload-439544"
	I1213 16:04:46.127971 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.150540 1532633 main.go:143] libmachine: Using SSH client type: native
	I1213 16:04:46.150873 1532633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34228 <nil> <nil>}
	I1213 16:04:46.150890 1532633 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-439544 && echo "no-preload-439544" | sudo tee /etc/hostname
	I1213 16:04:46.316630 1532633 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-439544
	
	I1213 16:04:46.316724 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.334085 1532633 main.go:143] libmachine: Using SSH client type: native
	I1213 16:04:46.334398 1532633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34228 <nil> <nil>}
	I1213 16:04:46.334425 1532633 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-439544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-439544/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-439544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:04:46.483606 1532633 main.go:143] libmachine: SSH cmd err, output: <nil>: 
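The inline script above is how minikube keeps the machine's hostname resolvable: it rewrites an existing 127.0.1.1 entry in /etc/hosts or appends one for no-preload-439544. A quick way to confirm the result on the node would be (hypothetical check, not part of the test run):

    $ minikube -p no-preload-439544 ssh "grep no-preload-439544 /etc/hosts"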
	I1213 16:04:46.483691 1532633 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:04:46.483736 1532633 ubuntu.go:190] setting up certificates
	I1213 16:04:46.483755 1532633 provision.go:84] configureAuth start
	I1213 16:04:46.483823 1532633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 16:04:46.500162 1532633 provision.go:143] copyHostCerts
	I1213 16:04:46.500243 1532633 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:04:46.500259 1532633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:04:46.500337 1532633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:04:46.500448 1532633 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:04:46.500465 1532633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:04:46.500494 1532633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:04:46.500550 1532633 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:04:46.500561 1532633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:04:46.500585 1532633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:04:46.500639 1532633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.no-preload-439544 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-439544]
	I1213 16:04:46.571887 1532633 provision.go:177] copyRemoteCerts
	I1213 16:04:46.571964 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:04:46.572031 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.590720 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:46.699229 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 16:04:46.717692 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:04:46.736074 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 16:04:46.754498 1532633 provision.go:87] duration metric: took 270.718838ms to configureAuth
	I1213 16:04:46.754524 1532633 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:04:46.754723 1532633 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:04:46.754730 1532633 machine.go:97] duration metric: took 3.809941558s to provisionDockerMachine
	I1213 16:04:46.754738 1532633 start.go:293] postStartSetup for "no-preload-439544" (driver="docker")
	I1213 16:04:46.754749 1532633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:04:46.754799 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:04:46.754840 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.773059 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:46.881154 1532633 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:04:46.885885 1532633 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:04:46.885916 1532633 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:04:46.885927 1532633 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:04:46.885987 1532633 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:04:46.886081 1532633 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:04:46.886202 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:04:46.895826 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:04:46.914821 1532633 start.go:296] duration metric: took 160.067146ms for postStartSetup
	I1213 16:04:46.914943 1532633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:04:46.915004 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.933638 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:47.036731 1532633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:04:47.041916 1532633 fix.go:56] duration metric: took 4.417449466s for fixHost
	I1213 16:04:47.041955 1532633 start.go:83] releasing machines lock for "no-preload-439544", held for 4.417501354s
	I1213 16:04:47.042027 1532633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 16:04:47.059436 1532633 ssh_runner.go:195] Run: cat /version.json
	I1213 16:04:47.059506 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:47.059506 1532633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:04:47.059564 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:47.084535 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:47.085394 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:47.187879 1532633 ssh_runner.go:195] Run: systemctl --version
	I1213 16:04:47.277224 1532633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:04:47.281744 1532633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:04:47.281868 1532633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:04:47.289697 1532633 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 16:04:47.289723 1532633 start.go:496] detecting cgroup driver to use...
	I1213 16:04:47.289772 1532633 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:04:47.289839 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:04:47.306480 1532633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:04:47.320548 1532633 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:04:47.320616 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:04:47.336688 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:04:47.350304 1532633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:04:47.479878 1532633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:04:47.617602 1532633 docker.go:234] disabling docker service ...
	I1213 16:04:47.617669 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:04:47.636022 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:04:47.651078 1532633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:04:47.763618 1532633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:04:47.889857 1532633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:04:47.903250 1532633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:04:47.917785 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:04:47.928047 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:04:47.937137 1532633 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:04:47.937223 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:04:47.946706 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:04:47.956145 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:04:47.964976 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:04:47.973942 1532633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:04:47.982426 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:04:47.991145 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:04:48.000472 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:04:48.013270 1532633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:04:48.021912 1532633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:04:48.030401 1532633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:04:48.154042 1532633 ssh_runner.go:195] Run: sudo systemctl restart containerd
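The block of sed edits above adjusts /etc/containerd/config.toml before the restart: SystemdCgroup is forced to false to match the cgroupfs driver detected on the host, the sandbox image is pinned to registry.k8s.io/pause:3.10.1, the CNI conf_dir is set to /etc/cni/net.d, and enable_unprivileged_ports is re-added as true. A minimal spot-check of the rewritten file on the node (assumed commands, using the profile name from this log):

    $ minikube -p no-preload-439544 ssh "grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml"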
	I1213 16:04:48.258872 1532633 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:04:48.258948 1532633 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:04:48.262883 1532633 start.go:564] Will wait 60s for crictl version
	I1213 16:04:48.262950 1532633 ssh_runner.go:195] Run: which crictl
	I1213 16:04:48.266721 1532633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:04:48.292243 1532633 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
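crictl reports containerd v2.2.0 here because /etc/crictl.yaml, written a few lines earlier, points it at unix:///run/containerd/containerd.sock. The same check can be reproduced by hand on the node, for example:

    $ minikube -p no-preload-439544 ssh "sudo crictl version"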
	I1213 16:04:48.292316 1532633 ssh_runner.go:195] Run: containerd --version
	I1213 16:04:48.313344 1532633 ssh_runner.go:195] Run: containerd --version
	I1213 16:04:48.341964 1532633 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 16:04:48.344943 1532633 cli_runner.go:164] Run: docker network inspect no-preload-439544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:04:48.371046 1532633 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 16:04:48.375277 1532633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:04:48.399899 1532633 kubeadm.go:884] updating cluster {Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:04:48.400017 1532633 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:04:48.400067 1532633 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:04:48.428371 1532633 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:04:48.428396 1532633 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:04:48.428408 1532633 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 16:04:48.428505 1532633 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-439544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
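The [Service] override above (ExecStart cleared, then redefined with --hostname-override, --node-ip and the bootstrap/kubelet kubeconfigs) is rendered in memory and, as the scp lines further down show, written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. One way to inspect the effective unit after the drop-in is applied (a sketch, assuming the profile name from this log):

    $ minikube -p no-preload-439544 ssh "systemctl cat kubelet"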
	I1213 16:04:48.428573 1532633 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:04:48.457647 1532633 cni.go:84] Creating CNI manager for ""
	I1213 16:04:48.457673 1532633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:04:48.457695 1532633 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 16:04:48.457722 1532633 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-439544 NodeName:no-preload-439544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:04:48.457839 1532633 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-439544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
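This is the complete kubeadm configuration minikube renders for the restart, with InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration concatenated into one manifest; the scp line below copies it to /var/tmp/minikube/kubeadm.yaml.new. A dry run against the bundled kubeadm binary would be one way to sanity-check such a file by hand (hypothetical command, paths taken from this log, not something the test executes):

    $ sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run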
	
	I1213 16:04:48.457908 1532633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 16:04:48.465484 1532633 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:04:48.465565 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:04:48.473169 1532633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 16:04:48.486257 1532633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 16:04:48.498821 1532633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 16:04:48.514097 1532633 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:04:48.518017 1532633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:04:48.528671 1532633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:04:48.641355 1532633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:04:48.658852 1532633 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544 for IP: 192.168.85.2
	I1213 16:04:48.658874 1532633 certs.go:195] generating shared ca certs ...
	I1213 16:04:48.658891 1532633 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:48.659056 1532633 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:04:48.659112 1532633 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:04:48.659125 1532633 certs.go:257] generating profile certs ...
	I1213 16:04:48.659257 1532633 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.key
	I1213 16:04:48.659352 1532633 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key.75137389
	I1213 16:04:48.659412 1532633 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.key
	I1213 16:04:48.659543 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:04:48.659584 1532633 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:04:48.659597 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:04:48.659638 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:04:48.659667 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:04:48.659704 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:04:48.659762 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:04:48.660460 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:04:48.678510 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:04:48.696835 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:04:48.715192 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:04:48.736544 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 16:04:48.754814 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 16:04:48.773396 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:04:48.791284 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 16:04:48.809761 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:04:48.827867 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:04:48.845597 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:04:48.862990 1532633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:04:48.875844 1532633 ssh_runner.go:195] Run: openssl version
	I1213 16:04:48.882335 1532633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.889759 1532633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:04:48.897307 1532633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.901108 1532633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.901221 1532633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.942179 1532633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:04:48.949998 1532633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:04:48.957450 1532633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:04:48.965192 1532633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:04:48.969267 1532633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:04:48.969332 1532633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:04:49.010426 1532633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:04:49.019213 1532633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.026990 1532633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:04:49.034610 1532633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.038616 1532633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.038700 1532633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.079625 1532633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:04:49.092345 1532633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:04:49.097174 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 16:04:49.138992 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 16:04:49.179959 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 16:04:49.220981 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 16:04:49.263836 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 16:04:49.305100 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
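(Editorial aside, not part of the log: the six "openssl x509 ... -checkend 86400" runs above simply ask whether each control-plane certificate expires within the next 24 hours. A minimal Go sketch of that check follows, assuming a hypothetical helper name certExpiresSoon; it is illustrative only and not minikube's source.)

// Hedged sketch: what an "openssl x509 -checkend 86400" style check amounts to,
// i.e. "does this certificate expire within the next 24 hours?".
// certExpiresSoon is a hypothetical helper name, not a minikube function.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func certExpiresSoon(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent idea to -checkend: report true if NotAfter falls inside the window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	soon, err := certExpiresSoon(data, 86400*time.Second)
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}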
	I1213 16:04:49.346214 1532633 kubeadm.go:401] StartCluster: {Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:04:49.346315 1532633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:04:49.346388 1532633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:04:49.374870 1532633 cri.go:89] found id: ""
	I1213 16:04:49.374958 1532633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:04:49.382718 1532633 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 16:04:49.382749 1532633 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 16:04:49.382843 1532633 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 16:04:49.392071 1532633 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 16:04:49.392512 1532633 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-439544" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:04:49.392626 1532633 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-439544" cluster setting kubeconfig missing "no-preload-439544" context setting]
	I1213 16:04:49.392945 1532633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:49.395692 1532633 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 16:04:49.403908 1532633 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 16:04:49.403991 1532633 kubeadm.go:602] duration metric: took 21.234385ms to restartPrimaryControlPlane
	I1213 16:04:49.404014 1532633 kubeadm.go:403] duration metric: took 57.808126ms to StartCluster
	I1213 16:04:49.404029 1532633 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:49.404097 1532633 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:04:49.404746 1532633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:49.404991 1532633 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:04:49.405373 1532633 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:04:49.405453 1532633 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 16:04:49.405529 1532633 addons.go:70] Setting storage-provisioner=true in profile "no-preload-439544"
	I1213 16:04:49.405551 1532633 addons.go:239] Setting addon storage-provisioner=true in "no-preload-439544"
	I1213 16:04:49.405574 1532633 host.go:66] Checking if "no-preload-439544" exists ...
	I1213 16:04:49.405617 1532633 addons.go:70] Setting dashboard=true in profile "no-preload-439544"
	I1213 16:04:49.405653 1532633 addons.go:239] Setting addon dashboard=true in "no-preload-439544"
	W1213 16:04:49.405672 1532633 addons.go:248] addon dashboard should already be in state true
	I1213 16:04:49.405720 1532633 host.go:66] Checking if "no-preload-439544" exists ...
	I1213 16:04:49.406068 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.406504 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.406575 1532633 addons.go:70] Setting default-storageclass=true in profile "no-preload-439544"
	I1213 16:04:49.406600 1532633 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-439544"
	I1213 16:04:49.406887 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.410533 1532633 out.go:179] * Verifying Kubernetes components...
	I1213 16:04:49.413615 1532633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:04:49.447417 1532633 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 16:04:49.451069 1532633 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:04:49.451101 1532633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 16:04:49.451201 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:49.463790 1532633 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 16:04:49.466503 1532633 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 16:04:49.473300 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 16:04:49.473383 1532633 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 16:04:49.473493 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:49.479179 1532633 addons.go:239] Setting addon default-storageclass=true in "no-preload-439544"
	I1213 16:04:49.479230 1532633 host.go:66] Checking if "no-preload-439544" exists ...
	I1213 16:04:49.479734 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.522588 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:49.545446 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:49.555551 1532633 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 16:04:49.555579 1532633 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 16:04:49.555649 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:49.583737 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:49.672869 1532633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:04:49.702326 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:04:49.726116 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 16:04:49.726144 1532633 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 16:04:49.731991 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:04:49.746280 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 16:04:49.746304 1532633 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 16:04:49.759419 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 16:04:49.759445 1532633 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 16:04:49.773846 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 16:04:49.773922 1532633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 16:04:49.788446 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 16:04:49.788520 1532633 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 16:04:49.801996 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 16:04:49.802073 1532633 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 16:04:49.815387 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 16:04:49.815464 1532633 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 16:04:49.828609 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 16:04:49.828684 1532633 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 16:04:49.862172 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:49.862245 1532633 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 16:04:49.898115 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:50.335585 1532633 node_ready.go:35] waiting up to 6m0s for node "no-preload-439544" to be "Ready" ...
	W1213 16:04:50.335668 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.335706 1532633 retry.go:31] will retry after 254.843686ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
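(Editorial aside, not part of the log: the "apply failed, will retry after ..." lines above and below show a retry-with-growing-delay loop around "kubectl apply" while the apiserver is still refusing connections. The sketch below is an illustrative Go approximation of that pattern, assuming hypothetical names applyAddon, applyWithRetry, and maxAttempts; it is not minikube's retry.go.)

// Hedged sketch of retry-with-backoff around a failing apply step.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// applyAddon stands in for running "kubectl apply -f <manifest>" over SSH.
func applyAddon(manifest string) error {
	return fmt.Errorf("connect: connection refused")
}

func applyWithRetry(manifest string, maxAttempts int) error {
	delay := 200 * time.Millisecond
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = applyAddon(manifest); err == nil {
			return nil
		}
		// Randomize the wait a little and grow it each round, mirroring the
		// "will retry after 254.843686ms / 509.7977ms / 1.041042015s" lines.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("apply failed, will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 4); err != nil {
		fmt.Println("giving up:", err)
	}
}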
	W1213 16:04:50.335826 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.335840 1532633 retry.go:31] will retry after 189.333653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.336064 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.336084 1532633 retry.go:31] will retry after 239.72839ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.525319 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:04:50.576944 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:50.591356 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:50.603642 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.603688 1532633 retry.go:31] will retry after 288.501165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.701103 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.701138 1532633 retry.go:31] will retry after 467.260982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.701217 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.701231 1532633 retry.go:31] will retry after 509.7977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.893390 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:50.954719 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.954753 1532633 retry.go:31] will retry after 738.142646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.169190 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:51.211722 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:51.245032 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.245067 1532633 retry.go:31] will retry after 783.746721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:51.279035 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.279081 1532633 retry.go:31] will retry after 291.424758ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.570765 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:51.626988 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.627029 1532633 retry.go:31] will retry after 1.041042015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.693422 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:51.750389 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.750422 1532633 retry.go:31] will retry after 685.062417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.029491 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:52.108797 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.108902 1532633 retry.go:31] will retry after 939.299233ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:52.336815 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:04:52.436241 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:52.496715 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.496747 1532633 retry.go:31] will retry after 1.433097098s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.669004 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:52.730009 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.730041 1532633 retry.go:31] will retry after 640.138294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.049072 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:53.112314 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.112422 1532633 retry.go:31] will retry after 1.734157912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.371175 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:53.437917 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.437956 1532633 retry.go:31] will retry after 2.49121489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.930071 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:53.986900 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.986935 1532633 retry.go:31] will retry after 2.048688298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:54.336885 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:04:54.847106 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:54.923019 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:54.923054 1532633 retry.go:31] will retry after 2.142030138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:55.930227 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:55.990258 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:55.990294 1532633 retry.go:31] will retry after 2.707811037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:56.036521 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:56.097317 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:56.097352 1532633 retry.go:31] will retry after 2.146665141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:56.836913 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:04:57.065333 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:57.147079 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:57.147117 1532633 retry.go:31] will retry after 3.792914481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.244261 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:58.304505 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.304538 1532633 retry.go:31] will retry after 3.360821909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.698362 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:58.754622 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.754653 1532633 retry.go:31] will retry after 5.541004931s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:59.336144 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:00.940480 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:01.003756 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:01.003802 1532633 retry.go:31] will retry after 2.96874462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:01.336264 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:01.665917 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:01.728242 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:01.728275 1532633 retry.go:31] will retry after 8.916729655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:03.336811 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:03.973522 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:04.037741 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:04.037776 1532633 retry.go:31] will retry after 6.210277542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:04.296383 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:04.360008 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:04.360045 1532633 retry.go:31] will retry after 7.195036005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:05.337054 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:07.836826 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:09.837041 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:10.248588 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:10.313237 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:10.313283 1532633 retry.go:31] will retry after 8.934777878s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:10.646200 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:10.705656 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:10.705690 1532633 retry.go:31] will retry after 12.190283501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:11.555705 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:11.661890 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:11.661924 1532633 retry.go:31] will retry after 5.300472002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:12.336810 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:14.336968 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:16.337075 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:16.963159 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:17.023434 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:17.023464 1532633 retry.go:31] will retry after 7.246070268s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:18.836178 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:19.248832 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:19.312969 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:19.313003 1532633 retry.go:31] will retry after 13.568837967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:20.836857 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:22.896385 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:22.954841 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:22.954869 1532633 retry.go:31] will retry after 19.284270803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:23.336898 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:24.270582 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:24.330461 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:24.330496 1532633 retry.go:31] will retry after 25.107997507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:25.836832 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:27.837099 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:29.837229 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:32.337006 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:32.882520 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:32.944328 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:32.944368 1532633 retry.go:31] will retry after 16.148859129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:34.836937 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:37.337064 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:39.837056 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:42.239525 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:42.310135 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:42.310173 1532633 retry.go:31] will retry after 15.456030755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:42.336738 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:44.336942 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:46.337118 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:48.836877 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:49.094336 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:49.194140 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:49.194179 1532633 retry.go:31] will retry after 37.565219756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:49.439413 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:49.497701 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:49.497737 1532633 retry.go:31] will retry after 28.907874152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:51.336848 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:53.836235 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:55.836809 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:57.766432 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:57.827035 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:57.827069 1532633 retry.go:31] will retry after 21.817184299s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:58.336352 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:00.336702 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:02.337038 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:04.836820 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:06.836996 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:08.837192 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:11.337013 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:13.836907 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:16.336156 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:18.336864 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:18.406172 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:06:18.467162 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:06:18.467195 1532633 retry.go:31] will retry after 30.701956357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:06:19.645168 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:06:19.709360 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:06:19.709466 1532633 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 16:06:20.336963 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:22.337091 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:24.836902 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:26.760577 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:06:26.824828 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:06:26.824933 1532633 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 16:06:27.336805 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:29.836878 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:32.336819 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:34.336911 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:36.836814 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:38.837068 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:41.336826 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:43.336942 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:45.836809 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:47.836978 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:49.169418 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:06:49.229366 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:06:49.229477 1532633 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 16:06:49.233284 1532633 out.go:179] * Enabled addons: 
	I1213 16:06:49.236115 1532633 addons.go:530] duration metric: took 1m59.83066349s for enable addons: enabled=[]
	W1213 16:06:50.336853 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:52.836975 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:55.336982 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:57.836819 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:59.837077 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:02.336884 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:04.337044 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:06.836907 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:09.336829 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:11.836902 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:13.836966 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:16.336991 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:18.836964 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:21.336861 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:23.336994 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:25.337136 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:27.837080 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:30.336834 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:32.336947 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:34.337009 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:36.836927 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:39.336872 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:41.836269 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:43.836773 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:45.837030 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:47.837167 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:50.336908 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:52.336995 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:54.836850 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:56.837113 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:59.336907 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:01.836519 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:03.836935 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:05.837188 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:08.336182 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:10.336290 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:12.836188 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:14.837007 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:17.336926 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:19.337137 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:21.836823 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:23.836887 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:26.336902 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:28.836875 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:30.837155 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:33.344927 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:35.836197 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:38.336221 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:40.336266 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:42.336937 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:44.837052 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:47.336949 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:49.337721 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:51.836216 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:54.336802 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:56.337015 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:58.337101 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:00.340034 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:02.837190 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:05.337007 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:07.836179 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:09.836379 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:12.336811 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:14.337024 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:16.836875 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:18.836958 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:21.336809 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:23.337044 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:25.337144 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:27.837183 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:30.336838 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:32.336966 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:34.836253 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:36.837105 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:39.336929 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:41.836911 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:44.336936 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:46.336992 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:48.837015 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:51.336072 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:53.336374 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:55.836834 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:57.837117 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:59.837157 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:02.336184 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:04.336871 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:06.336975 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:08.836835 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:10.836923 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:12.837238 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:15.336203 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:17.337025 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:19.837094 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:22.336928 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:24.836175 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:26.836947 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:28.837006 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:31.336932 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:33.836168 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:35.836828 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:37.837072 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:39.837209 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:42.337374 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:44.836865 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:47.336935 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:49.836264 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:10:50.335908 1532633 node_ready.go:38] duration metric: took 6m0.000276074s for node "no-preload-439544" to be "Ready" ...
	I1213 16:10:50.339158 1532633 out.go:203] 
	W1213 16:10:50.342306 1532633 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 16:10:50.342341 1532633 out.go:285] * 
	* 
	W1213 16:10:50.344947 1532633 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 16:10:50.347878 1532633 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-439544
helpers_test.go:244: (dbg) docker inspect no-preload-439544:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	        "Created": "2025-12-13T15:54:12.178460013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1532771,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T16:04:42.677982497Z",
	            "FinishedAt": "2025-12-13T16:04:41.261584549Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hostname",
	        "HostsPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hosts",
	        "LogPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d-json.log",
	        "Name": "/no-preload-439544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-439544:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-439544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	                "LowerDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-439544",
	                "Source": "/var/lib/docker/volumes/no-preload-439544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-439544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-439544",
	                "name.minikube.sigs.k8s.io": "no-preload-439544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4dced35fb175add3b26a40dff982545ee75f124f4735db30543f89845b336b1c",
	            "SandboxKey": "/var/run/docker/netns/4dced35fb175",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34228"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34229"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34232"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34230"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34231"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-439544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:74:3b:fa:0b:79",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77a202cfc0163f00e436e53caf7341626192055eeb5da6a6f5d953ced7f7adfb",
	                    "EndpointID": "7084aedd50f3a2db715b196cf320f0078e1627ae582576065d327fcc3de1e2ca",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-439544",
	                        "53d57adb0653"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
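The Ports block in the inspect output above records how the container's published ports map back to the host. minikube resolves the SSH endpoint from the same structure with a Go template (see the cli_runner.go entries later in this log); a hypothetical stand-alone check against the same container:

    # Hypothetical check, not part of the test run: print the host port mapped to 22/tcp,
    # using the same inspect template minikube runs during provisioning.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-439544
    # For the mapping shown above this prints 34228.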
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544: exit status 2 (320.491889ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
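An exit status of 2 with Host reported as Running usually means the host container is up but another component the status command checks is not. A hypothetical follow-up against the same profile, assuming the standard status template fields (Host, Kubelet, APIServer, Kubeconfig):

    # Hypothetical follow-up, not run by the harness: print each status field individually.
    out/minikube-linux-arm64 status -p no-preload-439544 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'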
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-439544 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─
────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─
────────────────────┤
	│ stop    │ -p embed-certs-270324 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:57 UTC │ 13 Dec 25 15:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-270324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ start   │ -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ image   │ embed-certs-270324 image list --format=json                                                                                                                                                                                                                │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ pause   │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ unpause │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p disable-driver-mounts-614298                                                                                                                                                                                                                            │ disable-driver-mounts-614298 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 16:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-946932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:00 UTC │
	│ stop    │ -p default-k8s-diff-port-946932 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-946932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ image   │ default-k8s-diff-port-946932 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ pause   │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ unpause │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ start   │ -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-439544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	│ stop    │ -p no-preload-439544 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │ 13 Dec 25 16:04 UTC │
	│ addons  │ enable dashboard -p no-preload-439544 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │ 13 Dec 25 16:04 UTC │
	│ start   │ -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-526531 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:10 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
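The audit rows without an END TIME were still in flight when these logs were captured. The start under test can be reproduced from its ARGS column; a hypothetical rerun of that exact invocation:

    # Copied from the 16:04 audit row for no-preload-439544 (the start this post-mortem covers).
    out/minikube-linux-arm64 start -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true \
      --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0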
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 16:04:42
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 16:04:42.413194 1532633 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:04:42.413307 1532633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:04:42.413317 1532633 out.go:374] Setting ErrFile to fd 2...
	I1213 16:04:42.413323 1532633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:04:42.413567 1532633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:04:42.413904 1532633 out.go:368] Setting JSON to false
	I1213 16:04:42.414786 1532633 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28031,"bootTime":1765613851,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:04:42.414858 1532633 start.go:143] virtualization:  
	I1213 16:04:42.417845 1532633 out.go:179] * [no-preload-439544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:04:42.421555 1532633 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:04:42.421640 1532633 notify.go:221] Checking for updates...
	I1213 16:04:42.427687 1532633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:04:42.430499 1532633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:04:42.433392 1532633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:04:42.436121 1532633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:04:42.439040 1532633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:04:42.442494 1532633 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:04:42.443099 1532633 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:04:42.466960 1532633 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:04:42.467080 1532633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:04:42.529333 1532633 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:04:42.520259632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:04:42.529443 1532633 docker.go:319] overlay module found
	I1213 16:04:42.532652 1532633 out.go:179] * Using the docker driver based on existing profile
	I1213 16:04:42.535539 1532633 start.go:309] selected driver: docker
	I1213 16:04:42.535559 1532633 start.go:927] validating driver "docker" against &{Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:04:42.535665 1532633 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:04:42.536328 1532633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:04:42.590849 1532633 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:04:42.581095747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:04:42.591180 1532633 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 16:04:42.591218 1532633 cni.go:84] Creating CNI manager for ""
	I1213 16:04:42.591273 1532633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:04:42.591342 1532633 start.go:353] cluster config:
	{Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:04:42.596381 1532633 out.go:179] * Starting "no-preload-439544" primary control-plane node in "no-preload-439544" cluster
	I1213 16:04:42.599266 1532633 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:04:42.602152 1532633 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:04:42.604937 1532633 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:04:42.605025 1532633 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:04:42.605107 1532633 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/config.json ...
	I1213 16:04:42.605412 1532633 cache.go:107] acquiring lock: {Name:mk6458bc7297def26ffc87aa852ed603976a017c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605492 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 16:04:42.605501 1532633 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.253µs
	I1213 16:04:42.605513 1532633 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 16:04:42.605528 1532633 cache.go:107] acquiring lock: {Name:mk04216f72d0f7cd3d2308def830acac11c8b85d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605561 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 16:04:42.605566 1532633 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 43.305µs
	I1213 16:04:42.605573 1532633 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605582 1532633 cache.go:107] acquiring lock: {Name:mk2054b1540f1c54f9b25f5f78ec681c8220cfcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605608 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 16:04:42.605613 1532633 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 31.647µs
	I1213 16:04:42.605619 1532633 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605629 1532633 cache.go:107] acquiring lock: {Name:mke9c9289e43b08c6e721f866225f618ba3afddf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605654 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 16:04:42.605660 1532633 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 31.704µs
	I1213 16:04:42.605665 1532633 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605674 1532633 cache.go:107] acquiring lock: {Name:mkd9f47dfe476ebd2c352fdee514a99c9fba7295 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605698 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 16:04:42.605703 1532633 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.621µs
	I1213 16:04:42.605709 1532633 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605719 1532633 cache.go:107] acquiring lock: {Name:mkecf0483a10d405cf273c97b7180611bb889c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605749 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 16:04:42.605754 1532633 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 35.872µs
	I1213 16:04:42.605759 1532633 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 16:04:42.605768 1532633 cache.go:107] acquiring lock: {Name:mkb08190a177fa29b2e45167b12d4742acf808cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605793 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 16:04:42.605798 1532633 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 31.294µs
	I1213 16:04:42.605804 1532633 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 16:04:42.605812 1532633 cache.go:107] acquiring lock: {Name:mk18c875751b02ce01ad21e18c1d2a3a9ed5d930 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605845 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 16:04:42.605849 1532633 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.415µs
	I1213 16:04:42.605855 1532633 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 16:04:42.605861 1532633 cache.go:87] Successfully saved all images to host disk.
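With --preload=false there is no preloaded tarball, so the cache.go lines above fall back to per-image tarballs under the test's MINIKUBE_HOME. A hypothetical listing of that cache on the build host:

    # Hypothetical check: the per-arch image cache the cache.go lines above verified.
    ls /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/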
	I1213 16:04:42.624275 1532633 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:04:42.624299 1532633 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:04:42.624322 1532633 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:04:42.624352 1532633 start.go:360] acquireMachinesLock for no-preload-439544: {Name:mk6eb67fc85c056d1917e38b306c3e4e0ae30393 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.624426 1532633 start.go:364] duration metric: took 45.578µs to acquireMachinesLock for "no-preload-439544"
	I1213 16:04:42.624452 1532633 start.go:96] Skipping create...Using existing machine configuration
	I1213 16:04:42.624458 1532633 fix.go:54] fixHost starting: 
	I1213 16:04:42.624729 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:42.641391 1532633 fix.go:112] recreateIfNeeded on no-preload-439544: state=Stopped err=<nil>
	W1213 16:04:42.641430 1532633 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 16:04:42.644748 1532633 out.go:252] * Restarting existing docker container for "no-preload-439544" ...
	I1213 16:04:42.644834 1532633 cli_runner.go:164] Run: docker start no-preload-439544
	I1213 16:04:42.892931 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:42.919215 1532633 kic.go:430] container "no-preload-439544" state is running.
	I1213 16:04:42.919778 1532633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 16:04:42.944557 1532633 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/config.json ...
	I1213 16:04:42.944781 1532633 machine.go:94] provisionDockerMachine start ...
	I1213 16:04:42.944844 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:42.967340 1532633 main.go:143] libmachine: Using SSH client type: native
	I1213 16:04:42.967676 1532633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34228 <nil> <nil>}
	I1213 16:04:42.967688 1532633 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:04:42.968381 1532633 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46966->127.0.0.1:34228: read: connection reset by peer
	I1213 16:04:46.127864 1532633 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-439544
	
	I1213 16:04:46.127889 1532633 ubuntu.go:182] provisioning hostname "no-preload-439544"
	I1213 16:04:46.127971 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.150540 1532633 main.go:143] libmachine: Using SSH client type: native
	I1213 16:04:46.150873 1532633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34228 <nil> <nil>}
	I1213 16:04:46.150890 1532633 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-439544 && echo "no-preload-439544" | sudo tee /etc/hostname
	I1213 16:04:46.316630 1532633 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-439544
	
	I1213 16:04:46.316724 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.334085 1532633 main.go:143] libmachine: Using SSH client type: native
	I1213 16:04:46.334398 1532633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34228 <nil> <nil>}
	I1213 16:04:46.334425 1532633 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-439544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-439544/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-439544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:04:46.483606 1532633 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:04:46.483691 1532633 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:04:46.483736 1532633 ubuntu.go:190] setting up certificates
	I1213 16:04:46.483755 1532633 provision.go:84] configureAuth start
	I1213 16:04:46.483823 1532633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 16:04:46.500162 1532633 provision.go:143] copyHostCerts
	I1213 16:04:46.500243 1532633 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:04:46.500259 1532633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:04:46.500337 1532633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:04:46.500448 1532633 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:04:46.500465 1532633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:04:46.500494 1532633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:04:46.500550 1532633 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:04:46.500561 1532633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:04:46.500585 1532633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:04:46.500639 1532633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.no-preload-439544 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-439544]
	I1213 16:04:46.571887 1532633 provision.go:177] copyRemoteCerts
	I1213 16:04:46.571964 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:04:46.572031 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.590720 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:46.699229 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 16:04:46.717692 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:04:46.736074 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 16:04:46.754498 1532633 provision.go:87] duration metric: took 270.718838ms to configureAuth
	I1213 16:04:46.754524 1532633 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:04:46.754723 1532633 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:04:46.754730 1532633 machine.go:97] duration metric: took 3.809941558s to provisionDockerMachine
	I1213 16:04:46.754738 1532633 start.go:293] postStartSetup for "no-preload-439544" (driver="docker")
	I1213 16:04:46.754749 1532633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:04:46.754799 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:04:46.754840 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.773059 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:46.881154 1532633 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:04:46.885885 1532633 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:04:46.885916 1532633 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:04:46.885927 1532633 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:04:46.885987 1532633 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:04:46.886081 1532633 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:04:46.886202 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:04:46.895826 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:04:46.914821 1532633 start.go:296] duration metric: took 160.067146ms for postStartSetup
	I1213 16:04:46.914943 1532633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:04:46.915004 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.933638 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:47.036731 1532633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:04:47.041916 1532633 fix.go:56] duration metric: took 4.417449466s for fixHost
	I1213 16:04:47.041955 1532633 start.go:83] releasing machines lock for "no-preload-439544", held for 4.417501354s
	I1213 16:04:47.042027 1532633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 16:04:47.059436 1532633 ssh_runner.go:195] Run: cat /version.json
	I1213 16:04:47.059506 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:47.059506 1532633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:04:47.059564 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:47.084535 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:47.085394 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:47.187879 1532633 ssh_runner.go:195] Run: systemctl --version
	I1213 16:04:47.277224 1532633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:04:47.281744 1532633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:04:47.281868 1532633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:04:47.289697 1532633 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 16:04:47.289723 1532633 start.go:496] detecting cgroup driver to use...
	I1213 16:04:47.289772 1532633 detect.go:187] detected "cgroupfs" cgroup driver on host os
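The "cgroupfs" result comes from the host's Docker daemon (the docker info dumps earlier in this log report CgroupDriver:cgroupfs) and is what later drives the SystemdCgroup = false edit to containerd's config. A hypothetical way to reproduce the check by hand:

    # Hypothetical check on the host: the field the cgroup-driver detection keys off.
    docker info --format '{{.CgroupDriver}}'
    # Expected here: cgroupfs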
	I1213 16:04:47.289839 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:04:47.306480 1532633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:04:47.320548 1532633 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:04:47.320616 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:04:47.336688 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:04:47.350304 1532633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:04:47.479878 1532633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:04:47.617602 1532633 docker.go:234] disabling docker service ...
	I1213 16:04:47.617669 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:04:47.636022 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:04:47.651078 1532633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:04:47.763618 1532633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:04:47.889857 1532633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:04:47.903250 1532633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:04:47.917785 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:04:47.928047 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:04:47.937137 1532633 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:04:47.937223 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:04:47.946706 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:04:47.956145 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:04:47.964976 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:04:47.973942 1532633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:04:47.982426 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:04:47.991145 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:04:48.000472 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:04:48.013270 1532633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:04:48.021912 1532633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:04:48.030401 1532633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:04:48.154042 1532633 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 16:04:48.258872 1532633 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:04:48.258948 1532633 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:04:48.262883 1532633 start.go:564] Will wait 60s for crictl version
	I1213 16:04:48.262950 1532633 ssh_runner.go:195] Run: which crictl
	I1213 16:04:48.266721 1532633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:04:48.292243 1532633 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 16:04:48.292316 1532633 ssh_runner.go:195] Run: containerd --version
	I1213 16:04:48.313344 1532633 ssh_runner.go:195] Run: containerd --version
	I1213 16:04:48.341964 1532633 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 16:04:48.344943 1532633 cli_runner.go:164] Run: docker network inspect no-preload-439544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
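The network inspect above uses a heavily escaped template to pull the profile network's name, subnet, gateway, MTU, and container IPs in one call. A hypothetical simplified query of just the addressing fields:

    # Hypothetical simplified form of the inspect above: subnet and gateway only.
    docker network inspect no-preload-439544 -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # Per the container inspect earlier in this log, the gateway is 192.168.85.1.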
	I1213 16:04:48.371046 1532633 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 16:04:48.375277 1532633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:04:48.399899 1532633 kubeadm.go:884] updating cluster {Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:04:48.400017 1532633 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:04:48.400067 1532633 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:04:48.428371 1532633 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:04:48.428396 1532633 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:04:48.428408 1532633 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 16:04:48.428505 1532633 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-439544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 16:04:48.428573 1532633 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:04:48.457647 1532633 cni.go:84] Creating CNI manager for ""
	I1213 16:04:48.457673 1532633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:04:48.457695 1532633 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 16:04:48.457722 1532633 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-439544 NodeName:no-preload-439544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:04:48.457839 1532633 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-439544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 16:04:48.457908 1532633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 16:04:48.465484 1532633 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:04:48.465565 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:04:48.473169 1532633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 16:04:48.486257 1532633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 16:04:48.498821 1532633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 16:04:48.514097 1532633 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:04:48.518017 1532633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:04:48.528671 1532633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:04:48.641355 1532633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:04:48.658852 1532633 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544 for IP: 192.168.85.2
	I1213 16:04:48.658874 1532633 certs.go:195] generating shared ca certs ...
	I1213 16:04:48.658891 1532633 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:48.659056 1532633 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:04:48.659112 1532633 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:04:48.659125 1532633 certs.go:257] generating profile certs ...
	I1213 16:04:48.659257 1532633 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.key
	I1213 16:04:48.659352 1532633 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key.75137389
	I1213 16:04:48.659412 1532633 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.key
	I1213 16:04:48.659543 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:04:48.659584 1532633 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:04:48.659597 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:04:48.659638 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:04:48.659667 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:04:48.659704 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:04:48.659762 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:04:48.660460 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:04:48.678510 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:04:48.696835 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:04:48.715192 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:04:48.736544 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 16:04:48.754814 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 16:04:48.773396 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:04:48.791284 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 16:04:48.809761 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:04:48.827867 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:04:48.845597 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:04:48.862990 1532633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:04:48.875844 1532633 ssh_runner.go:195] Run: openssl version
	I1213 16:04:48.882335 1532633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.889759 1532633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:04:48.897307 1532633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.901108 1532633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.901221 1532633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.942179 1532633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:04:48.949998 1532633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:04:48.957450 1532633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:04:48.965192 1532633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:04:48.969267 1532633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:04:48.969332 1532633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:04:49.010426 1532633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:04:49.019213 1532633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.026990 1532633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:04:49.034610 1532633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.038616 1532633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.038700 1532633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.079625 1532633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:04:49.092345 1532633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:04:49.097174 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 16:04:49.138992 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 16:04:49.179959 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 16:04:49.220981 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 16:04:49.263836 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 16:04:49.305100 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 16:04:49.346214 1532633 kubeadm.go:401] StartCluster: {Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:04:49.346315 1532633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:04:49.346388 1532633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:04:49.374870 1532633 cri.go:89] found id: ""
	I1213 16:04:49.374958 1532633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:04:49.382718 1532633 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 16:04:49.382749 1532633 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 16:04:49.382843 1532633 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 16:04:49.392071 1532633 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 16:04:49.392512 1532633 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-439544" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:04:49.392626 1532633 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-439544" cluster setting kubeconfig missing "no-preload-439544" context setting]
	I1213 16:04:49.392945 1532633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:49.395692 1532633 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 16:04:49.403908 1532633 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 16:04:49.403991 1532633 kubeadm.go:602] duration metric: took 21.234385ms to restartPrimaryControlPlane
	I1213 16:04:49.404014 1532633 kubeadm.go:403] duration metric: took 57.808126ms to StartCluster
	I1213 16:04:49.404029 1532633 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:49.404097 1532633 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:04:49.404746 1532633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:49.404991 1532633 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:04:49.405373 1532633 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:04:49.405453 1532633 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 16:04:49.405529 1532633 addons.go:70] Setting storage-provisioner=true in profile "no-preload-439544"
	I1213 16:04:49.405551 1532633 addons.go:239] Setting addon storage-provisioner=true in "no-preload-439544"
	I1213 16:04:49.405574 1532633 host.go:66] Checking if "no-preload-439544" exists ...
	I1213 16:04:49.405617 1532633 addons.go:70] Setting dashboard=true in profile "no-preload-439544"
	I1213 16:04:49.405653 1532633 addons.go:239] Setting addon dashboard=true in "no-preload-439544"
	W1213 16:04:49.405672 1532633 addons.go:248] addon dashboard should already be in state true
	I1213 16:04:49.405720 1532633 host.go:66] Checking if "no-preload-439544" exists ...
	I1213 16:04:49.406068 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.406504 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.406575 1532633 addons.go:70] Setting default-storageclass=true in profile "no-preload-439544"
	I1213 16:04:49.406600 1532633 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-439544"
	I1213 16:04:49.406887 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.410533 1532633 out.go:179] * Verifying Kubernetes components...
	I1213 16:04:49.413615 1532633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:04:49.447417 1532633 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 16:04:49.451069 1532633 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:04:49.451101 1532633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 16:04:49.451201 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:49.463790 1532633 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 16:04:49.466503 1532633 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 16:04:49.473300 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 16:04:49.473383 1532633 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 16:04:49.473493 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:49.479179 1532633 addons.go:239] Setting addon default-storageclass=true in "no-preload-439544"
	I1213 16:04:49.479230 1532633 host.go:66] Checking if "no-preload-439544" exists ...
	I1213 16:04:49.479734 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.522588 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:49.545446 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:49.555551 1532633 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 16:04:49.555579 1532633 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 16:04:49.555649 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:49.583737 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:49.672869 1532633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:04:49.702326 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:04:49.726116 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 16:04:49.726144 1532633 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 16:04:49.731991 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:04:49.746280 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 16:04:49.746304 1532633 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 16:04:49.759419 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 16:04:49.759445 1532633 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 16:04:49.773846 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 16:04:49.773922 1532633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 16:04:49.788446 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 16:04:49.788520 1532633 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 16:04:49.801996 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 16:04:49.802073 1532633 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 16:04:49.815387 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 16:04:49.815464 1532633 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 16:04:49.828609 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 16:04:49.828684 1532633 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 16:04:49.862172 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:49.862245 1532633 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 16:04:49.898115 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:50.335585 1532633 node_ready.go:35] waiting up to 6m0s for node "no-preload-439544" to be "Ready" ...
	W1213 16:04:50.335668 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.335706 1532633 retry.go:31] will retry after 254.843686ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.335826 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.335840 1532633 retry.go:31] will retry after 189.333653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.336064 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.336084 1532633 retry.go:31] will retry after 239.72839ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.525319 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:04:50.576944 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:50.591356 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:50.603642 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.603688 1532633 retry.go:31] will retry after 288.501165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.701103 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.701138 1532633 retry.go:31] will retry after 467.260982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.701217 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.701231 1532633 retry.go:31] will retry after 509.7977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.893390 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:50.954719 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.954753 1532633 retry.go:31] will retry after 738.142646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.169190 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:51.211722 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:51.245032 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.245067 1532633 retry.go:31] will retry after 783.746721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:51.279035 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.279081 1532633 retry.go:31] will retry after 291.424758ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.570765 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:51.626988 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.627029 1532633 retry.go:31] will retry after 1.041042015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.693422 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:51.750389 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.750422 1532633 retry.go:31] will retry after 685.062417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.029491 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:52.108797 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.108902 1532633 retry.go:31] will retry after 939.299233ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:52.336815 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:04:52.436241 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:52.496715 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.496747 1532633 retry.go:31] will retry after 1.433097098s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.669004 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:52.730009 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.730041 1532633 retry.go:31] will retry after 640.138294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.049072 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:53.112314 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.112422 1532633 retry.go:31] will retry after 1.734157912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.371175 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:53.437917 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.437956 1532633 retry.go:31] will retry after 2.49121489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.930071 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:53.986900 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.986935 1532633 retry.go:31] will retry after 2.048688298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:54.336885 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:04:54.847106 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:54.923019 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:54.923054 1532633 retry.go:31] will retry after 2.142030138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:55.930227 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:55.990258 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:55.990294 1532633 retry.go:31] will retry after 2.707811037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:56.036521 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:56.097317 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:56.097352 1532633 retry.go:31] will retry after 2.146665141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:56.836913 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:04:57.065333 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:57.147079 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:57.147117 1532633 retry.go:31] will retry after 3.792914481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.244261 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:58.304505 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.304538 1532633 retry.go:31] will retry after 3.360821909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.698362 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:58.754622 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.754653 1532633 retry.go:31] will retry after 5.541004931s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:59.336144 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:00.940480 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:01.003756 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:01.003802 1532633 retry.go:31] will retry after 2.96874462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:01.336264 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:01.665917 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:01.728242 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:01.728275 1532633 retry.go:31] will retry after 8.916729655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:03.336811 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:03.973522 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:04.037741 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:04.037776 1532633 retry.go:31] will retry after 6.210277542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:04.296383 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:04.360008 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:04.360045 1532633 retry.go:31] will retry after 7.195036005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:05.337054 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:07.836826 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:09.837041 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:10.248588 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:10.313237 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:10.313283 1532633 retry.go:31] will retry after 8.934777878s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:10.646200 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:10.705656 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:10.705690 1532633 retry.go:31] will retry after 12.190283501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:11.555705 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:11.661890 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:11.661924 1532633 retry.go:31] will retry after 5.300472002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:12.336810 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:14.336968 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:16.337075 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:16.963159 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:17.023434 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:17.023464 1532633 retry.go:31] will retry after 7.246070268s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:18.836178 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:19.248832 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:19.312969 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:19.313003 1532633 retry.go:31] will retry after 13.568837967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:20.836857 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:22.896385 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:22.954841 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:22.954869 1532633 retry.go:31] will retry after 19.284270803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:23.336898 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:24.270582 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:24.330461 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:24.330496 1532633 retry.go:31] will retry after 25.107997507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:25.836832 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:27.837099 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:29.837229 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:32.337006 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:32.882520 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:32.944328 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:32.944368 1532633 retry.go:31] will retry after 16.148859129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:34.836937 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:37.337064 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:39.837056 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:42.239525 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:42.310135 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:42.310173 1532633 retry.go:31] will retry after 15.456030755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
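The stderr also names the root cause and a workaround: validation only fails because the schema download needs a live apiserver, and kubectl itself suggests --validate=false to skip it. A quicker way to confirm that the apiserver is actually down, rather than unreachable over the network, is to probe the two endpoints that appear in this log. The snippet below is a stand-alone diagnostic sketch, not part of minikube; both addresses are taken from the log lines above.

	// Diagnostic sketch only: probe the two apiserver endpoints seen in this log
	// (kubectl on the node uses localhost:8443, the host-side readiness checks use
	// 192.168.85.2:8443) to confirm the "connection refused" errors come from the
	// apiserver being down rather than a broken network path.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		endpoints := []string{"localhost:8443", "192.168.85.2:8443"}
		for _, ep := range endpoints {
			conn, err := net.DialTimeout("tcp", ep, 2*time.Second)
			if err != nil {
				fmt.Printf("%s: %v\n", ep, err) // e.g. "connect: connection refused"
				continue
			}
			conn.Close()
			fmt.Printf("%s: TCP connect succeeded\n", ep)
		}
	}

While the apiserver is down, both probes should report connection refused, matching the errors above.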
	W1213 16:05:42.336738 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:44.336942 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:46.337118 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:48.836877 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:49.094336 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:49.194140 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:49.194179 1532633 retry.go:31] will retry after 37.565219756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:49.439413 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:49.497701 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:49.497737 1532633 retry.go:31] will retry after 28.907874152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:51.336848 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:53.836235 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:55.836809 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:57.766432 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:57.827035 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:57.827069 1532633 retry.go:31] will retry after 21.817184299s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:58.336352 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:00.336702 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:02.337038 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:04.836820 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:06.836996 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:08.837192 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:11.337013 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:13.836907 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:16.336156 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:18.336864 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:18.406172 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:06:18.467162 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:06:18.467195 1532633 retry.go:31] will retry after 30.701956357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:06:19.645168 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:06:19.709360 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:06:19.709466 1532633 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 16:06:20.336963 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:22.337091 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:24.836902 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:26.760577 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:06:26.824828 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:06:26.824933 1532633 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 16:06:27.336805 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:27.802892 1527131 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001194483s
	I1213 16:06:27.802923 1527131 kubeadm.go:319] 
	I1213 16:06:27.803273 1527131 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 16:06:27.803399 1527131 kubeadm.go:319] 	- The kubelet is not running
	I1213 16:06:27.803765 1527131 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 16:06:27.803775 1527131 kubeadm.go:319] 
	I1213 16:06:27.803981 1527131 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 16:06:27.804042 1527131 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 16:06:27.804098 1527131 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 16:06:27.804106 1527131 kubeadm.go:319] 
	I1213 16:06:27.809079 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 16:06:27.809540 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 16:06:27.809697 1527131 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 16:06:27.810128 1527131 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 16:06:27.810147 1527131 kubeadm.go:319] 
	I1213 16:06:27.810227 1527131 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 16:06:27.810425 1527131 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001194483s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 16:06:27.810556 1527131 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 16:06:28.218967 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 16:06:28.233104 1527131 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 16:06:28.233179 1527131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 16:06:28.241250 1527131 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 16:06:28.241272 1527131 kubeadm.go:158] found existing configuration files:
	
	I1213 16:06:28.241325 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 16:06:28.249399 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 16:06:28.249464 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 16:06:28.257096 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 16:06:28.265010 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 16:06:28.265075 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 16:06:28.273325 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 16:06:28.281364 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 16:06:28.281443 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 16:06:28.289177 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 16:06:28.297335 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 16:06:28.297406 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 16:06:28.305336 1527131 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 16:06:28.346459 1527131 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:06:28.346706 1527131 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:06:28.412526 1527131 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:06:28.412656 1527131 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:06:28.412720 1527131 kubeadm.go:319] OS: Linux
	I1213 16:06:28.412796 1527131 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:06:28.412874 1527131 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:06:28.412953 1527131 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:06:28.413023 1527131 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:06:28.413091 1527131 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:06:28.413171 1527131 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:06:28.413247 1527131 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:06:28.413330 1527131 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:06:28.413409 1527131 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:06:28.487502 1527131 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:06:28.487768 1527131 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:06:28.487886 1527131 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:06:28.493209 1527131 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:06:28.498603 1527131 out.go:252]   - Generating certificates and keys ...
	I1213 16:06:28.498777 1527131 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:06:28.498875 1527131 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:06:28.498987 1527131 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 16:06:28.499079 1527131 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 16:06:28.499178 1527131 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 16:06:28.499261 1527131 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 16:06:28.499387 1527131 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 16:06:28.499489 1527131 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 16:06:28.499597 1527131 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 16:06:28.499699 1527131 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 16:06:28.499765 1527131 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 16:06:28.499849 1527131 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:06:28.647459 1527131 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:06:28.854581 1527131 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:06:29.198188 1527131 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:06:29.369603 1527131 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:06:29.759796 1527131 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:06:29.760686 1527131 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:06:29.763405 1527131 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:06:29.766742 1527131 out.go:252]   - Booting up control plane ...
	I1213 16:06:29.766921 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:06:29.767060 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:06:29.767160 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:06:29.788844 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:06:29.789113 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:06:29.796997 1527131 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:06:29.797476 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:06:29.797700 1527131 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:06:29.934060 1527131 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:06:29.934180 1527131 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1213 16:06:29.836878 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:32.336819 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:34.336911 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:36.836814 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:38.837068 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:41.336826 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:43.336942 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:45.836809 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:47.836978 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:49.169418 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:06:49.229366 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:06:49.229477 1532633 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 16:06:49.233284 1532633 out.go:179] * Enabled addons: 
	I1213 16:06:49.236115 1532633 addons.go:530] duration metric: took 1m59.83066349s for enable addons: enabled=[]
	W1213 16:06:50.336853 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:52.836975 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:55.336982 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:57.836819 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:59.837077 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:02.336884 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:04.337044 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:06.836907 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:09.336829 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:11.836902 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:13.836966 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:16.336991 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:18.836964 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:21.336861 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:23.336994 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:25.337136 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:27.837080 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:30.336834 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:32.336947 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:34.337009 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:36.836927 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:39.336872 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:41.836269 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:43.836773 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:45.837030 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:47.837167 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:50.336908 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:52.336995 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:54.836850 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:56.837113 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:59.336907 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:01.836519 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:03.836935 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:05.837188 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:08.336182 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:10.336290 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:12.836188 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:14.837007 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:17.336926 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:19.337137 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:21.836823 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:23.836887 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:26.336902 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:28.836875 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:30.837155 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:33.344927 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:35.836197 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:38.336221 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:40.336266 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:42.336937 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:44.837052 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:47.336949 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:49.337721 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:51.836216 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:54.336802 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:56.337015 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:58.337101 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:00.340034 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:02.837190 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:05.337007 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:07.836179 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:09.836379 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:12.336811 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:14.337024 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:16.836875 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:18.836958 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:21.336809 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:23.337044 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:25.337144 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:27.837183 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:30.336838 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:32.336966 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:34.836253 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:36.837105 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:39.336929 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:41.836911 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:44.336936 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:46.336992 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:48.837015 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:51.336072 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:53.336374 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:55.836834 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:57.837117 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:59.837157 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:02.336184 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:04.336871 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:06.336975 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:08.836835 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:10.836923 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:12.837238 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:15.336203 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:17.337025 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:19.837094 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:22.336928 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:24.836175 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:26.836947 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:10:29.934181 1527131 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000175102s
	I1213 16:10:29.934219 1527131 kubeadm.go:319] 
	I1213 16:10:29.934278 1527131 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 16:10:29.934315 1527131 kubeadm.go:319] 	- The kubelet is not running
	I1213 16:10:29.934420 1527131 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 16:10:29.934431 1527131 kubeadm.go:319] 
	I1213 16:10:29.934571 1527131 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 16:10:29.934616 1527131 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 16:10:29.934646 1527131 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 16:10:29.934653 1527131 kubeadm.go:319] 
	I1213 16:10:29.939000 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 16:10:29.939475 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 16:10:29.939605 1527131 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 16:10:29.939919 1527131 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 16:10:29.939941 1527131 kubeadm.go:319] 
	I1213 16:10:29.940021 1527131 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 16:10:29.940103 1527131 kubeadm.go:403] duration metric: took 8m6.466581637s to StartCluster
	I1213 16:10:29.940140 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:10:29.940207 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:10:29.965453 1527131 cri.go:89] found id: ""
	I1213 16:10:29.965477 1527131 logs.go:282] 0 containers: []
	W1213 16:10:29.965487 1527131 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:10:29.965493 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:10:29.965556 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:10:29.991522 1527131 cri.go:89] found id: ""
	I1213 16:10:29.991547 1527131 logs.go:282] 0 containers: []
	W1213 16:10:29.991560 1527131 logs.go:284] No container was found matching "etcd"
	I1213 16:10:29.991566 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:10:29.991628 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:10:30.032969 1527131 cri.go:89] found id: ""
	I1213 16:10:30.032993 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.033002 1527131 logs.go:284] No container was found matching "coredns"
	I1213 16:10:30.033008 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:10:30.033087 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:10:30.086903 1527131 cri.go:89] found id: ""
	I1213 16:10:30.086929 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.086937 1527131 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:10:30.086944 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:10:30.087018 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:10:30.120054 1527131 cri.go:89] found id: ""
	I1213 16:10:30.120085 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.120097 1527131 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:10:30.120106 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:10:30.120179 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:10:30.147481 1527131 cri.go:89] found id: ""
	I1213 16:10:30.147512 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.147521 1527131 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:10:30.147528 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:10:30.147597 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:10:30.175161 1527131 cri.go:89] found id: ""
	I1213 16:10:30.175192 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.175202 1527131 logs.go:284] No container was found matching "kindnet"
	I1213 16:10:30.175212 1527131 logs.go:123] Gathering logs for kubelet ...
	I1213 16:10:30.175227 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:10:30.236323 1527131 logs.go:123] Gathering logs for dmesg ...
	I1213 16:10:30.236366 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:10:30.252852 1527131 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:10:30.252882 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:10:30.323930 1527131 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:10:30.308479    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.309175    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.310915    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318127    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318780    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:10:30.308479    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.309175    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.310915    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318127    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318780    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:10:30.323954 1527131 logs.go:123] Gathering logs for containerd ...
	I1213 16:10:30.323966 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:10:30.363277 1527131 logs.go:123] Gathering logs for container status ...
	I1213 16:10:30.363323 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 16:10:30.390658 1527131 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000175102s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 16:10:30.390707 1527131 out.go:285] * 
	W1213 16:10:30.390758 1527131 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000175102s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:10:30.390773 1527131 out.go:285] * 
	W1213 16:10:30.392934 1527131 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 16:10:30.397735 1527131 out.go:203] 
	W1213 16:10:30.401437 1527131 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000175102s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:10:30.401483 1527131 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 16:10:30.401510 1527131 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 16:10:30.404721 1527131 out.go:203] 
	W1213 16:10:28.837006 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:31.336932 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:33.836168 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:35.836828 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:37.837072 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:39.837209 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:42.337374 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:44.836865 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:47.336935 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:49.836264 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:10:50.335908 1532633 node_ready.go:38] duration metric: took 6m0.000276074s for node "no-preload-439544" to be "Ready" ...
	I1213 16:10:50.339158 1532633 out.go:203] 
	W1213 16:10:50.342306 1532633 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 16:10:50.342341 1532633 out.go:285] * 
	W1213 16:10:50.344947 1532633 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 16:10:50.347878 1532633 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216398345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216470499Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216572930Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216649974Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216720996Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216786135Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216843479Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216912187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216985974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.217088848Z" level=info msg="Connect containerd service"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.217463198Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.218120758Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.231205084Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.231272274Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.231345659Z" level=info msg="Start subscribing containerd event"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.231396062Z" level=info msg="Start recovering state"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.254976526Z" level=info msg="Start event monitor"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255192266Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255261828Z" level=info msg="Start streaming server"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255422735Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255487619Z" level=info msg="runtime interface starting up..."
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255541731Z" level=info msg="starting plugins..."
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255628375Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 16:04:48 no-preload-439544 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.257678755Z" level=info msg="containerd successfully booted in 0.068392s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:10:51.531708    3980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:51.532267    3980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:51.534035    3980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:51.534515    3980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:51.536203    3980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 16:10:51 up  7:53,  0 user,  load average: 0.36, 0.58, 1.21
	Linux no-preload-439544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 16:10:48 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:48 no-preload-439544 kubelet[3859]: E1213 16:10:48.642170    3859 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:10:48 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:10:48 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:10:49 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 13 16:10:49 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:49 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:49 no-preload-439544 kubelet[3865]: E1213 16:10:49.407033    3865 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:10:49 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:10:49 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:10:50 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 13 16:10:50 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:50 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:50 no-preload-439544 kubelet[3871]: E1213 16:10:50.144611    3871 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:10:50 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:10:50 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:10:50 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 13 16:10:50 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:50 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:50 no-preload-439544 kubelet[3892]: E1213 16:10:50.958771    3892 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:10:50 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:10:50 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:10:51 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 13 16:10:51 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:10:51 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544: exit status 2 (359.78856ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-439544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (369.88s)
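The kubelet restarts captured above all fail the same validation ("kubelet is configured to not run on a host using cgroup v1"), and the kubeadm preflight warning in the same log names the KubeletConfiguration option 'FailCgroupV1' as the explicit opt-in. A minimal sketch of that opt-in, assuming the serialized field name is failCgroupV1 and that it would be supplied as a kubelet configuration patch (both are assumptions; this run did not set it, and minikube's own suggestion in the log is --extra-config=kubelet.cgroup-driver=systemd):

	# hypothetical kubelet configuration patch for a cgroup v1 host (sketch, not part of this run)
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# allow kubelet v1.35 to start on a cgroup v1 host; deprecated per the KEP linked in the warning
	failCgroupV1: false

Per the same warning, the SystemVerification preflight check would also have to be skipped explicitly on such a host.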

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (100.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-526531 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1213 16:10:38.303933 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-526531 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m38.459932022s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-526531 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
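The addon failure above is the apply callback run by `minikube addons enable`; each manifest fails only because the apiserver on localhost:8443 refuses connections, not because of the manifests themselves. For reference, this is the callback command reproduced from the stderr block above, reflowed onto multiple lines (a manual retry would only be expected to succeed once the apiserver answers on 8443; the --validate=false escape hatch mentioned in the errors would not help while the endpoint is down):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	  -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	  -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	  -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	  -f /etc/kubernetes/addons/metrics-server-service.yaml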
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-526531
helpers_test.go:244: (dbg) docker inspect newest-cni-526531:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54",
	        "Created": "2025-12-13T16:02:15.548035148Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1527552,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T16:02:15.61154228Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/hosts",
	        "LogPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54-json.log",
	        "Name": "/newest-cni-526531",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-526531:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-526531",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54",
	                "LowerDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-526531",
	                "Source": "/var/lib/docker/volumes/newest-cni-526531/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-526531",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-526531",
	                "name.minikube.sigs.k8s.io": "newest-cni-526531",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bfa296b8ce5b9a9521ebc2c98193f9318423ba22bf82448755a60c700c13c19",
	            "SandboxKey": "/var/run/docker/netns/4bfa296b8ce5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34223"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34224"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34227"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34225"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34226"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-526531": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:63:98:58:f5:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ae0d89b977ec0aa4cc17943d84decbf5f3cf47ff39573e4d4fdb9e9873e2828c",
	                    "EndpointID": "f95fa4c05c60c14b35da98f9b531c20fc8d91ab1572e72ada9f86ed1f99d4e1e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-526531",
	                        "dd2af60ccebf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
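The inspect dump above is what the post-mortem helper captures for the newest-cni-526531 node container; the bindings under NetworkSettings.Ports (34223-34227 on 127.0.0.1) are the host ports the harness dials for SSH, Docker, and the apiserver. A minimal Go sketch, using only the standard library, that pulls the 8443/tcp host port out of such a dump; the struct covers just the fields used here, and the inspect.json file name is an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// inspectEntry models only the part of `docker inspect <container>` output
// that this sketch needs: the host bindings published for each container port.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// inspect.json is assumed to hold the JSON array shown above,
	// e.g. saved with: docker inspect newest-cni-526531 > inspect.json
	data, err := os.ReadFile("inspect.json")
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(data, &entries); err != nil {
		panic(err)
	}
	if len(entries) == 0 {
		panic("no containers in inspect output")
	}
	for _, b := range entries[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
	}
}

This is the same information the log later extracts with `docker container inspect -f` templates, just decoded from the full JSON instead.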
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-526531 -n newest-cni-526531
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-526531 -n newest-cni-526531: exit status 6 (317.577488ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 16:12:10.950734 1541822 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-526531" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
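The status command reports the host as Running but exits 6 because the profile's cluster entry is missing from the kubeconfig the test points at (the "does not appear in .../kubeconfig" error above), not because the container is down. A rough Go sketch of that lookup, assuming gopkg.in/yaml.v3 and a path plus cluster name passed on the command line; it approximates the endpoint check that fails above and is not minikube's actual implementation:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeconfig models only the cluster names from a kubeconfig file.
type kubeconfig struct {
	Clusters []struct {
		Name string `yaml:"name"`
	} `yaml:"clusters"`
}

func main() {
	// usage: check-kubeconfig <kubeconfig-path> <cluster-name>  (names are assumptions)
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	var cfg kubeconfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	for _, c := range cfg.Clusters {
		if c.Name == os.Args[2] {
			fmt.Println("cluster found in kubeconfig")
			return
		}
	}
	fmt.Printf("%q does not appear in %s\n", os.Args[2], os.Args[1])
	os.Exit(6) // same exit status the failed status command reports above
}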
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-526531 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─
────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─
────────────────────┤
	│ stop    │ -p embed-certs-270324 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:57 UTC │ 13 Dec 25 15:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-270324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ start   │ -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:58 UTC │ 13 Dec 25 15:58 UTC │
	│ image   │ embed-certs-270324 image list --format=json                                                                                                                                                                                                                │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ pause   │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ unpause │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p disable-driver-mounts-614298                                                                                                                                                                                                                            │ disable-driver-mounts-614298 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 16:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-946932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:00 UTC │
	│ stop    │ -p default-k8s-diff-port-946932 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-946932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ image   │ default-k8s-diff-port-946932 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ pause   │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ unpause │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ start   │ -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-439544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	│ stop    │ -p no-preload-439544 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │ 13 Dec 25 16:04 UTC │
	│ addons  │ enable dashboard -p no-preload-439544 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │ 13 Dec 25 16:04 UTC │
	│ start   │ -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-526531 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:10 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 16:04:42
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 16:04:42.413194 1532633 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:04:42.413307 1532633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:04:42.413317 1532633 out.go:374] Setting ErrFile to fd 2...
	I1213 16:04:42.413323 1532633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:04:42.413567 1532633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:04:42.413904 1532633 out.go:368] Setting JSON to false
	I1213 16:04:42.414786 1532633 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28031,"bootTime":1765613851,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:04:42.414858 1532633 start.go:143] virtualization:  
	I1213 16:04:42.417845 1532633 out.go:179] * [no-preload-439544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:04:42.421555 1532633 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:04:42.421640 1532633 notify.go:221] Checking for updates...
	I1213 16:04:42.427687 1532633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:04:42.430499 1532633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:04:42.433392 1532633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:04:42.436121 1532633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:04:42.439040 1532633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:04:42.442494 1532633 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:04:42.443099 1532633 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:04:42.466960 1532633 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:04:42.467080 1532633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:04:42.529333 1532633 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:04:42.520259632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:04:42.529443 1532633 docker.go:319] overlay module found
	I1213 16:04:42.532652 1532633 out.go:179] * Using the docker driver based on existing profile
	I1213 16:04:42.535539 1532633 start.go:309] selected driver: docker
	I1213 16:04:42.535559 1532633 start.go:927] validating driver "docker" against &{Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:04:42.535665 1532633 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:04:42.536328 1532633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:04:42.590849 1532633 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:04:42.581095747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:04:42.591180 1532633 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 16:04:42.591218 1532633 cni.go:84] Creating CNI manager for ""
	I1213 16:04:42.591273 1532633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:04:42.591342 1532633 start.go:353] cluster config:
	{Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:04:42.596381 1532633 out.go:179] * Starting "no-preload-439544" primary control-plane node in "no-preload-439544" cluster
	I1213 16:04:42.599266 1532633 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:04:42.602152 1532633 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:04:42.604937 1532633 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:04:42.605025 1532633 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:04:42.605107 1532633 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/config.json ...
	I1213 16:04:42.605412 1532633 cache.go:107] acquiring lock: {Name:mk6458bc7297def26ffc87aa852ed603976a017c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605492 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 16:04:42.605501 1532633 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.253µs
	I1213 16:04:42.605513 1532633 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 16:04:42.605528 1532633 cache.go:107] acquiring lock: {Name:mk04216f72d0f7cd3d2308def830acac11c8b85d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605561 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 16:04:42.605566 1532633 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 43.305µs
	I1213 16:04:42.605573 1532633 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605582 1532633 cache.go:107] acquiring lock: {Name:mk2054b1540f1c54f9b25f5f78ec681c8220cfcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605608 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 16:04:42.605613 1532633 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 31.647µs
	I1213 16:04:42.605619 1532633 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605629 1532633 cache.go:107] acquiring lock: {Name:mke9c9289e43b08c6e721f866225f618ba3afddf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605654 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 16:04:42.605660 1532633 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 31.704µs
	I1213 16:04:42.605665 1532633 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605674 1532633 cache.go:107] acquiring lock: {Name:mkd9f47dfe476ebd2c352fdee514a99c9fba7295 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605698 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 16:04:42.605703 1532633 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.621µs
	I1213 16:04:42.605709 1532633 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 16:04:42.605719 1532633 cache.go:107] acquiring lock: {Name:mkecf0483a10d405cf273c97b7180611bb889c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605749 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 16:04:42.605754 1532633 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 35.872µs
	I1213 16:04:42.605759 1532633 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 16:04:42.605768 1532633 cache.go:107] acquiring lock: {Name:mkb08190a177fa29b2e45167b12d4742acf808cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605793 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 16:04:42.605798 1532633 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 31.294µs
	I1213 16:04:42.605804 1532633 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 16:04:42.605812 1532633 cache.go:107] acquiring lock: {Name:mk18c875751b02ce01ad21e18c1d2a3a9ed5d930 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.605845 1532633 cache.go:115] /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 16:04:42.605849 1532633 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.415µs
	I1213 16:04:42.605855 1532633 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 16:04:42.605861 1532633 cache.go:87] Successfully saved all images to host disk.
	I1213 16:04:42.624275 1532633 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:04:42.624299 1532633 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:04:42.624322 1532633 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:04:42.624352 1532633 start.go:360] acquireMachinesLock for no-preload-439544: {Name:mk6eb67fc85c056d1917e38b306c3e4e0ae30393 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:04:42.624426 1532633 start.go:364] duration metric: took 45.578µs to acquireMachinesLock for "no-preload-439544"
	I1213 16:04:42.624452 1532633 start.go:96] Skipping create...Using existing machine configuration
	I1213 16:04:42.624458 1532633 fix.go:54] fixHost starting: 
	I1213 16:04:42.624729 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:42.641391 1532633 fix.go:112] recreateIfNeeded on no-preload-439544: state=Stopped err=<nil>
	W1213 16:04:42.641430 1532633 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 16:04:42.644748 1532633 out.go:252] * Restarting existing docker container for "no-preload-439544" ...
	I1213 16:04:42.644834 1532633 cli_runner.go:164] Run: docker start no-preload-439544
	I1213 16:04:42.892931 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:42.919215 1532633 kic.go:430] container "no-preload-439544" state is running.
	I1213 16:04:42.919778 1532633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 16:04:42.944557 1532633 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/config.json ...
	I1213 16:04:42.944781 1532633 machine.go:94] provisionDockerMachine start ...
	I1213 16:04:42.944844 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:42.967340 1532633 main.go:143] libmachine: Using SSH client type: native
	I1213 16:04:42.967676 1532633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34228 <nil> <nil>}
	I1213 16:04:42.967688 1532633 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:04:42.968381 1532633 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46966->127.0.0.1:34228: read: connection reset by peer
	I1213 16:04:46.127864 1532633 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-439544
	
	I1213 16:04:46.127889 1532633 ubuntu.go:182] provisioning hostname "no-preload-439544"
	I1213 16:04:46.127971 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.150540 1532633 main.go:143] libmachine: Using SSH client type: native
	I1213 16:04:46.150873 1532633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34228 <nil> <nil>}
	I1213 16:04:46.150890 1532633 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-439544 && echo "no-preload-439544" | sudo tee /etc/hostname
	I1213 16:04:46.316630 1532633 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-439544
	
	I1213 16:04:46.316724 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.334085 1532633 main.go:143] libmachine: Using SSH client type: native
	I1213 16:04:46.334398 1532633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34228 <nil> <nil>}
	I1213 16:04:46.334425 1532633 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-439544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-439544/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-439544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:04:46.483606 1532633 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:04:46.483691 1532633 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:04:46.483736 1532633 ubuntu.go:190] setting up certificates
	I1213 16:04:46.483755 1532633 provision.go:84] configureAuth start
	I1213 16:04:46.483823 1532633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 16:04:46.500162 1532633 provision.go:143] copyHostCerts
	I1213 16:04:46.500243 1532633 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:04:46.500259 1532633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:04:46.500337 1532633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:04:46.500448 1532633 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:04:46.500465 1532633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:04:46.500494 1532633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:04:46.500550 1532633 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:04:46.500561 1532633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:04:46.500585 1532633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:04:46.500639 1532633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.no-preload-439544 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-439544]
	I1213 16:04:46.571887 1532633 provision.go:177] copyRemoteCerts
	I1213 16:04:46.571964 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:04:46.572031 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.590720 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:46.699229 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 16:04:46.717692 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:04:46.736074 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 16:04:46.754498 1532633 provision.go:87] duration metric: took 270.718838ms to configureAuth
	I1213 16:04:46.754524 1532633 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:04:46.754723 1532633 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:04:46.754730 1532633 machine.go:97] duration metric: took 3.809941558s to provisionDockerMachine
	I1213 16:04:46.754738 1532633 start.go:293] postStartSetup for "no-preload-439544" (driver="docker")
	I1213 16:04:46.754749 1532633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:04:46.754799 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:04:46.754840 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.773059 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:46.881154 1532633 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:04:46.885885 1532633 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:04:46.885916 1532633 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:04:46.885927 1532633 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:04:46.885987 1532633 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:04:46.886081 1532633 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:04:46.886202 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:04:46.895826 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:04:46.914821 1532633 start.go:296] duration metric: took 160.067146ms for postStartSetup
	I1213 16:04:46.914943 1532633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:04:46.915004 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:46.933638 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:47.036731 1532633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:04:47.041916 1532633 fix.go:56] duration metric: took 4.417449466s for fixHost
	I1213 16:04:47.041955 1532633 start.go:83] releasing machines lock for "no-preload-439544", held for 4.417501354s
	I1213 16:04:47.042027 1532633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-439544
	I1213 16:04:47.059436 1532633 ssh_runner.go:195] Run: cat /version.json
	I1213 16:04:47.059506 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:47.059506 1532633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:04:47.059564 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:47.084535 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:47.085394 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:47.187879 1532633 ssh_runner.go:195] Run: systemctl --version
	I1213 16:04:47.277224 1532633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:04:47.281744 1532633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:04:47.281868 1532633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:04:47.289697 1532633 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 16:04:47.289723 1532633 start.go:496] detecting cgroup driver to use...
	I1213 16:04:47.289772 1532633 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:04:47.289839 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:04:47.306480 1532633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:04:47.320548 1532633 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:04:47.320616 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:04:47.336688 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:04:47.350304 1532633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:04:47.479878 1532633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:04:47.617602 1532633 docker.go:234] disabling docker service ...
	I1213 16:04:47.617669 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:04:47.636022 1532633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:04:47.651078 1532633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:04:47.763618 1532633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:04:47.889857 1532633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:04:47.903250 1532633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:04:47.917785 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:04:47.928047 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:04:47.937137 1532633 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:04:47.937223 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:04:47.946706 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:04:47.956145 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:04:47.964976 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:04:47.973942 1532633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:04:47.982426 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:04:47.991145 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:04:48.000472 1532633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:04:48.013270 1532633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:04:48.021912 1532633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:04:48.030401 1532633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:04:48.154042 1532633 ssh_runner.go:195] Run: sudo systemctl restart containerd
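The run of sed -r commands above rewrites /etc/containerd/config.toml before the restart: it pins the sandbox image to pause:3.10.1, forces SystemdCgroup = false to match the cgroupfs driver detected on the host, normalizes the runtime to io.containerd.runc.v2, resets conf_dir, and re-enables unprivileged ports. A small Go sketch of just the SystemdCgroup substitution as a regexp rewrite, assuming a local copy of config.toml; the real flow runs sed over SSH exactly as logged:

package main

import (
	"os"
	"regexp"
)

func main() {
	// config.toml is assumed to be a local copy of /etc/containerd/config.toml.
	data, err := os.ReadFile("config.toml")
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile("config.toml", out, 0o644); err != nil {
		panic(err)
	}
}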
	I1213 16:04:48.258872 1532633 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:04:48.258948 1532633 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:04:48.262883 1532633 start.go:564] Will wait 60s for crictl version
	I1213 16:04:48.262950 1532633 ssh_runner.go:195] Run: which crictl
	I1213 16:04:48.266721 1532633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:04:48.292243 1532633 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 16:04:48.292316 1532633 ssh_runner.go:195] Run: containerd --version
	I1213 16:04:48.313344 1532633 ssh_runner.go:195] Run: containerd --version
	I1213 16:04:48.341964 1532633 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 16:04:48.344943 1532633 cli_runner.go:164] Run: docker network inspect no-preload-439544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
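The --format argument on that docker network inspect call is a Go text/template that the Docker CLI evaluates against the network's JSON; the {{range .IPAM.Config}} pieces walk the IPAM blocks to pull out the subnet and gateway. A standalone sketch of how such a template renders, using a hand-filled struct whose values are chosen to match the 192.168.85.x network this profile uses in the log rather than real Docker output:

package main

import (
	"os"
	"text/template"
)

// network mimics the shape the Docker CLI exposes to --format templates.
type network struct {
	Name string
	IPAM struct {
		Config []struct{ Subnet, Gateway string }
	}
}

func main() {
	n := network{Name: "no-preload-439544"}
	n.IPAM.Config = []struct{ Subnet, Gateway string }{
		{Subnet: "192.168.85.0/24", Gateway: "192.168.85.1"},
	}
	tmpl := template.Must(template.New("net").Parse(
		"{{.Name}}: subnet {{range .IPAM.Config}}{{.Subnet}}{{end}}, " +
			"gateway {{range .IPAM.Config}}{{.Gateway}}{{end}}\n"))
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}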
	I1213 16:04:48.371046 1532633 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 16:04:48.375277 1532633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:04:48.399899 1532633 kubeadm.go:884] updating cluster {Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:04:48.400017 1532633 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:04:48.400067 1532633 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:04:48.428371 1532633 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:04:48.428396 1532633 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:04:48.428408 1532633 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 16:04:48.428505 1532633 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-439544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 16:04:48.428573 1532633 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:04:48.457647 1532633 cni.go:84] Creating CNI manager for ""
	I1213 16:04:48.457673 1532633 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:04:48.457695 1532633 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 16:04:48.457722 1532633 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-439544 NodeName:no-preload-439544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:04:48.457839 1532633 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-439544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 16:04:48.457908 1532633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 16:04:48.465484 1532633 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:04:48.465565 1532633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:04:48.473169 1532633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 16:04:48.486257 1532633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 16:04:48.498821 1532633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
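The kubeadm, kubelet and kube-proxy configuration above is what gets written to /var/tmp/minikube/kubeadm.yaml.new; its comments note that disk-pressure eviction is deliberately disabled for the test node. A small sketch that re-parses just the KubeletConfiguration block to confirm those thresholds, using gopkg.in/yaml.v3 (the library choice is illustrative, not minikube's own):

    // kubeletcfgcheck re-parses the KubeletConfiguration fragment from the log
    // and prints the eviction settings that were generated above.
    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    const kubeletCfg = `
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    imageGCHighThresholdPercent: 100
    evictionHard:
      nodefs.available: "0%"
      nodefs.inodesFree: "0%"
      imagefs.available: "0%"
    failSwapOn: false
    `

    func main() {
        var cfg struct {
            ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
            EvictionHard                map[string]string `yaml:"evictionHard"`
        }
        if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
            panic(err)
        }
        fmt.Println("imageGCHighThresholdPercent:", cfg.ImageGCHighThresholdPercent)
        for k, v := range cfg.EvictionHard {
            // All thresholds are "0%", i.e. disk-pressure eviction is effectively off.
            fmt.Printf("evictionHard[%s] = %s\n", k, v)
        }
    }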
	I1213 16:04:48.514097 1532633 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:04:48.518017 1532633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:04:48.528671 1532633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:04:48.641355 1532633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:04:48.658852 1532633 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544 for IP: 192.168.85.2
	I1213 16:04:48.658874 1532633 certs.go:195] generating shared ca certs ...
	I1213 16:04:48.658891 1532633 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:48.659056 1532633 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:04:48.659112 1532633 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:04:48.659125 1532633 certs.go:257] generating profile certs ...
	I1213 16:04:48.659257 1532633 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.key
	I1213 16:04:48.659352 1532633 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key.75137389
	I1213 16:04:48.659412 1532633 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.key
	I1213 16:04:48.659543 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:04:48.659584 1532633 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:04:48.659597 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:04:48.659638 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:04:48.659667 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:04:48.659704 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:04:48.659762 1532633 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:04:48.660460 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:04:48.678510 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:04:48.696835 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:04:48.715192 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:04:48.736544 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 16:04:48.754814 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 16:04:48.773396 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:04:48.791284 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 16:04:48.809761 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:04:48.827867 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:04:48.845597 1532633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:04:48.862990 1532633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:04:48.875844 1532633 ssh_runner.go:195] Run: openssl version
	I1213 16:04:48.882335 1532633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.889759 1532633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:04:48.897307 1532633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.901108 1532633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.901221 1532633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:04:48.942179 1532633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:04:48.949998 1532633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:04:48.957450 1532633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:04:48.965192 1532633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:04:48.969267 1532633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:04:48.969332 1532633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:04:49.010426 1532633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:04:49.019213 1532633 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.026990 1532633 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:04:49.034610 1532633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.038616 1532633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.038700 1532633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:04:49.079625 1532633 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
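Each ca-certificates entry above is trusted the same way: openssl x509 -hash computes the subject-name hash (51391683, 3ec20f2e and b5213941 in this run) and minikube then checks for the matching <hash>.0 symlink under /etc/ssl/certs. A sketch of that hash-and-link step, assuming openssl is installed and the process runs as root:

    // cahash mirrors the hash/symlink check in the log for one CA file.
    // Assumptions: openssl on PATH, root privileges for the symlink.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const pem = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            // Create the symlink the trust store expects for this CA.
            if err := os.Symlink(pem, link); err != nil {
                panic(err)
            }
        }
        fmt.Println("trusted via", link)
    }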
	I1213 16:04:49.092345 1532633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:04:49.097174 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 16:04:49.138992 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 16:04:49.179959 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 16:04:49.220981 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 16:04:49.263836 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 16:04:49.305100 1532633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
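The six openssl x509 -checkend 86400 runs above ask whether each control-plane certificate expires within the next 86400 seconds (24 hours); a cert that close to expiry would be regenerated rather than reused. An illustrative Go equivalent for one of those files, using crypto/x509 instead of openssl:

    // certexpiry checks what "-checkend 86400" checks: NotAfter at least 24h away.
    // The path is one of the certificates verified above; purely illustrative.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h, would be regenerated")
            return
        }
        fmt.Println("certificate valid until", cert.NotAfter)
    }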
	I1213 16:04:49.346214 1532633 kubeadm.go:401] StartCluster: {Name:no-preload-439544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-439544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:04:49.346315 1532633 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:04:49.346388 1532633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:04:49.374870 1532633 cri.go:89] found id: ""
	I1213 16:04:49.374958 1532633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:04:49.382718 1532633 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 16:04:49.382749 1532633 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 16:04:49.382843 1532633 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 16:04:49.392071 1532633 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 16:04:49.392512 1532633 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-439544" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:04:49.392626 1532633 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-439544" cluster setting kubeconfig missing "no-preload-439544" context setting]
	I1213 16:04:49.392945 1532633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
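The kubeconfig repair above adds the missing no-preload-439544 cluster and context entries under a file lock before continuing. A sketch of the same repair with client-go's clientcmd package; the server URL and CA path below are assumptions for illustration (with the docker driver the real entry points at a host-mapped port), only the file path and profile name come from the log:

    // kubeconfigrepair adds cluster/context entries if they are missing, then
    // writes the kubeconfig back, roughly what "will repair" does above.
    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        const path = "/home/jenkins/minikube-integration/22122-1251074/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }
        if _, ok := cfg.Clusters["no-preload-439544"]; !ok {
            c := api.NewCluster()
            c.Server = "https://192.168.85.2:8443" // assumed endpoint for illustration
            c.CertificateAuthority = "/home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt"
            cfg.Clusters["no-preload-439544"] = c
        }
        if _, ok := cfg.Contexts["no-preload-439544"]; !ok {
            ctx := api.NewContext()
            ctx.Cluster = "no-preload-439544"
            ctx.AuthInfo = "no-preload-439544"
            cfg.Contexts["no-preload-439544"] = ctx
        }
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            panic(err)
        }
    }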
	I1213 16:04:49.395692 1532633 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 16:04:49.403908 1532633 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 16:04:49.403991 1532633 kubeadm.go:602] duration metric: took 21.234385ms to restartPrimaryControlPlane
	I1213 16:04:49.404014 1532633 kubeadm.go:403] duration metric: took 57.808126ms to StartCluster
	I1213 16:04:49.404029 1532633 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:49.404097 1532633 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:04:49.404746 1532633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:04:49.404991 1532633 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:04:49.405373 1532633 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:04:49.405453 1532633 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 16:04:49.405529 1532633 addons.go:70] Setting storage-provisioner=true in profile "no-preload-439544"
	I1213 16:04:49.405551 1532633 addons.go:239] Setting addon storage-provisioner=true in "no-preload-439544"
	I1213 16:04:49.405574 1532633 host.go:66] Checking if "no-preload-439544" exists ...
	I1213 16:04:49.405617 1532633 addons.go:70] Setting dashboard=true in profile "no-preload-439544"
	I1213 16:04:49.405653 1532633 addons.go:239] Setting addon dashboard=true in "no-preload-439544"
	W1213 16:04:49.405672 1532633 addons.go:248] addon dashboard should already be in state true
	I1213 16:04:49.405720 1532633 host.go:66] Checking if "no-preload-439544" exists ...
	I1213 16:04:49.406068 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.406504 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.406575 1532633 addons.go:70] Setting default-storageclass=true in profile "no-preload-439544"
	I1213 16:04:49.406600 1532633 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-439544"
	I1213 16:04:49.406887 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.410533 1532633 out.go:179] * Verifying Kubernetes components...
	I1213 16:04:49.413615 1532633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:04:49.447417 1532633 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 16:04:49.451069 1532633 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:04:49.451101 1532633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 16:04:49.451201 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:49.463790 1532633 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 16:04:49.466503 1532633 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 16:04:49.473300 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 16:04:49.473383 1532633 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 16:04:49.473493 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:49.479179 1532633 addons.go:239] Setting addon default-storageclass=true in "no-preload-439544"
	I1213 16:04:49.479230 1532633 host.go:66] Checking if "no-preload-439544" exists ...
	I1213 16:04:49.479734 1532633 cli_runner.go:164] Run: docker container inspect no-preload-439544 --format={{.State.Status}}
	I1213 16:04:49.522588 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:49.545446 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
	I1213 16:04:49.555551 1532633 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 16:04:49.555579 1532633 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 16:04:49.555649 1532633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-439544
	I1213 16:04:49.583737 1532633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34228 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/no-preload-439544/id_rsa Username:docker}
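The ssh clients above all dial 127.0.0.1:34228, the host port Docker mapped to the container's 22/tcp; the inspect template in the cli_runner lines is what resolves it. A sketch of that lookup, assuming the docker CLI is on PATH and the container name from the log:

    // sshport resolves the host port mapped to 22/tcp for the profile container.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same Go template used by the cli_runner lines above.
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, "no-preload-439544").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh to 127.0.0.1:" + strings.TrimSpace(string(out)))
    }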
	I1213 16:04:49.672869 1532633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:04:49.702326 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:04:49.726116 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 16:04:49.726144 1532633 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 16:04:49.731991 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:04:49.746280 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 16:04:49.746304 1532633 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 16:04:49.759419 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 16:04:49.759445 1532633 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 16:04:49.773846 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 16:04:49.773922 1532633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 16:04:49.788446 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 16:04:49.788520 1532633 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 16:04:49.801996 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 16:04:49.802073 1532633 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 16:04:49.815387 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 16:04:49.815464 1532633 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 16:04:49.828609 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 16:04:49.828684 1532633 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 16:04:49.862172 1532633 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:49.862245 1532633 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 16:04:49.898115 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:50.335585 1532633 node_ready.go:35] waiting up to 6m0s for node "no-preload-439544" to be "Ready" ...
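From here node_ready.go polls for up to 6m0s until the node reports the Ready condition. A sketch of such a poll with client-go; the kubeconfig path and node name are taken from the log, while the 3-second interval is an assumption for illustration:

    // nodewait polls the node's Ready condition until true or the deadline passes.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-439544", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(3 * time.Second) // assumed poll interval
        }
        fmt.Println("timed out waiting for node to be Ready")
    }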
	W1213 16:04:50.335668 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.335706 1532633 retry.go:31] will retry after 254.843686ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.335826 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.335840 1532633 retry.go:31] will retry after 189.333653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.336064 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.336084 1532633 retry.go:31] will retry after 239.72839ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.525319 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:04:50.576944 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:50.591356 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:50.603642 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.603688 1532633 retry.go:31] will retry after 288.501165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.701103 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.701138 1532633 retry.go:31] will retry after 467.260982ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:50.701217 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.701231 1532633 retry.go:31] will retry after 509.7977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.893390 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:50.954719 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:50.954753 1532633 retry.go:31] will retry after 738.142646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
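All of the failures above are the same condition: the apiserver on localhost:8443 is not accepting connections yet, so every kubectl apply fails validation and retry.go schedules another attempt with a growing delay. A simplified sketch of that retry loop; the attempt count and backoff values are illustrative, and minikube's own delays include jitter:

    // applyretry retries a kubectl apply while the apiserver is still coming up.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        backoff := 250 * time.Millisecond
        for attempt := 1; attempt <= 10; attempt++ {
            // Same invocation style as the log: sudo with KUBECONFIG set inline.
            cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
                "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
                "apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
            out, err := cmd.CombinedOutput()
            if err == nil {
                fmt.Println("applied")
                return
            }
            fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
            time.Sleep(backoff)
            backoff *= 2 // simple exponential backoff; illustrative only
        }
    }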
	I1213 16:04:51.169190 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:04:51.211722 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:51.245032 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.245067 1532633 retry.go:31] will retry after 783.746721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:51.279035 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.279081 1532633 retry.go:31] will retry after 291.424758ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.570765 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:51.626988 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.627029 1532633 retry.go:31] will retry after 1.041042015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.693422 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:51.750389 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:51.750422 1532633 retry.go:31] will retry after 685.062417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.029491 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:52.108797 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.108902 1532633 retry.go:31] will retry after 939.299233ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:52.336815 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:04:52.436241 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:52.496715 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.496747 1532633 retry.go:31] will retry after 1.433097098s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.669004 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:52.730009 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:52.730041 1532633 retry.go:31] will retry after 640.138294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.049072 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:53.112314 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.112422 1532633 retry.go:31] will retry after 1.734157912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.371175 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:53.437917 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.437956 1532633 retry.go:31] will retry after 2.49121489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.930071 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:53.986900 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:53.986935 1532633 retry.go:31] will retry after 2.048688298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:54.336885 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:04:54.847106 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:54.923019 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:54.923054 1532633 retry.go:31] will retry after 2.142030138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:55.930227 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:55.990258 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:55.990294 1532633 retry.go:31] will retry after 2.707811037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:56.036521 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:56.097317 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:56.097352 1532633 retry.go:31] will retry after 2.146665141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:56.836913 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:04:57.065333 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:04:57.147079 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:57.147117 1532633 retry.go:31] will retry after 3.792914481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.244261 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:04:58.304505 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.304538 1532633 retry.go:31] will retry after 3.360821909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.698362 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:04:58.754622 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:04:58.754653 1532633 retry.go:31] will retry after 5.541004931s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:04:59.336144 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:00.940480 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:01.003756 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:01.003802 1532633 retry.go:31] will retry after 2.96874462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:01.336264 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:01.665917 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:01.728242 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:01.728275 1532633 retry.go:31] will retry after 8.916729655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:03.336811 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:03.973522 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:04.037741 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:04.037776 1532633 retry.go:31] will retry after 6.210277542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:04.296383 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:04.360008 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:04.360045 1532633 retry.go:31] will retry after 7.195036005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:05.337054 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:07.836826 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:09.837041 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:10.248588 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:10.313237 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:10.313283 1532633 retry.go:31] will retry after 8.934777878s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:10.646200 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:10.705656 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:10.705690 1532633 retry.go:31] will retry after 12.190283501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:11.555705 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:11.661890 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:11.661924 1532633 retry.go:31] will retry after 5.300472002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:12.336810 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:14.336968 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:16.337075 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:16.963159 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:17.023434 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:17.023464 1532633 retry.go:31] will retry after 7.246070268s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:18.836178 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:19.248832 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:19.312969 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:19.313003 1532633 retry.go:31] will retry after 13.568837967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:20.836857 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:22.896385 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:22.954841 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:22.954869 1532633 retry.go:31] will retry after 19.284270803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:23.336898 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:24.270582 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:24.330461 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:24.330496 1532633 retry.go:31] will retry after 25.107997507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:25.836832 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:27.837099 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:29.837229 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:32.337006 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:32.882520 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:32.944328 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:32.944368 1532633 retry.go:31] will retry after 16.148859129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:34.836937 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:37.337064 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:39.837056 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:42.239525 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:42.310135 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:42.310173 1532633 retry.go:31] will retry after 15.456030755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:42.336738 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:44.336942 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:46.337118 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:48.836877 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:49.094336 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:05:49.194140 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:49.194179 1532633 retry.go:31] will retry after 37.565219756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:49.439413 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:05:49.497701 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:49.497737 1532633 retry.go:31] will retry after 28.907874152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:51.336848 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:53.836235 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:05:55.836809 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:05:57.766432 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:05:57.827035 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:05:57.827069 1532633 retry.go:31] will retry after 21.817184299s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:05:58.336352 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:00.336702 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:02.337038 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:04.836820 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:06.836996 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:08.837192 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:11.337013 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:13.836907 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:16.336156 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:18.336864 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:18.406172 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:06:18.467162 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:06:18.467195 1532633 retry.go:31] will retry after 30.701956357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:06:19.645168 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:06:19.709360 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:06:19.709466 1532633 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 16:06:20.336963 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:22.337091 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:24.836902 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:26.760577 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:06:26.824828 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:06:26.824933 1532633 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 16:06:27.336805 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:27.802892 1527131 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001194483s
	I1213 16:06:27.802923 1527131 kubeadm.go:319] 
	I1213 16:06:27.803273 1527131 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 16:06:27.803399 1527131 kubeadm.go:319] 	- The kubelet is not running
	I1213 16:06:27.803765 1527131 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 16:06:27.803775 1527131 kubeadm.go:319] 
	I1213 16:06:27.803981 1527131 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 16:06:27.804042 1527131 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 16:06:27.804098 1527131 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 16:06:27.804106 1527131 kubeadm.go:319] 
	I1213 16:06:27.809079 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 16:06:27.809540 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 16:06:27.809697 1527131 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 16:06:27.810128 1527131 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 16:06:27.810147 1527131 kubeadm.go:319] 
	I1213 16:06:27.810227 1527131 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 16:06:27.810425 1527131 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-526531] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001194483s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 16:06:27.810556 1527131 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 16:06:28.218967 1527131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 16:06:28.233104 1527131 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 16:06:28.233179 1527131 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 16:06:28.241250 1527131 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 16:06:28.241272 1527131 kubeadm.go:158] found existing configuration files:
	
	I1213 16:06:28.241325 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 16:06:28.249399 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 16:06:28.249464 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 16:06:28.257096 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 16:06:28.265010 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 16:06:28.265075 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 16:06:28.273325 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 16:06:28.281364 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 16:06:28.281443 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 16:06:28.289177 1527131 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 16:06:28.297335 1527131 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 16:06:28.297406 1527131 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 16:06:28.305336 1527131 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 16:06:28.346459 1527131 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 16:06:28.346706 1527131 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:06:28.412526 1527131 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:06:28.412656 1527131 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:06:28.412720 1527131 kubeadm.go:319] OS: Linux
	I1213 16:06:28.412796 1527131 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:06:28.412874 1527131 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:06:28.412953 1527131 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:06:28.413023 1527131 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:06:28.413091 1527131 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:06:28.413171 1527131 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:06:28.413247 1527131 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:06:28.413330 1527131 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:06:28.413409 1527131 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:06:28.487502 1527131 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:06:28.487768 1527131 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:06:28.487886 1527131 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:06:28.493209 1527131 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:06:28.498603 1527131 out.go:252]   - Generating certificates and keys ...
	I1213 16:06:28.498777 1527131 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:06:28.498875 1527131 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:06:28.498987 1527131 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 16:06:28.499079 1527131 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 16:06:28.499178 1527131 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 16:06:28.499261 1527131 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 16:06:28.499387 1527131 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 16:06:28.499489 1527131 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 16:06:28.499597 1527131 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 16:06:28.499699 1527131 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 16:06:28.499765 1527131 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 16:06:28.499849 1527131 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:06:28.647459 1527131 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:06:28.854581 1527131 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:06:29.198188 1527131 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:06:29.369603 1527131 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:06:29.759796 1527131 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:06:29.760686 1527131 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:06:29.763405 1527131 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:06:29.766742 1527131 out.go:252]   - Booting up control plane ...
	I1213 16:06:29.766921 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:06:29.767060 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:06:29.767160 1527131 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:06:29.788844 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:06:29.789113 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:06:29.796997 1527131 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:06:29.797476 1527131 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:06:29.797700 1527131 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:06:29.934060 1527131 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:06:29.934180 1527131 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1213 16:06:29.836878 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:32.336819 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:34.336911 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:36.836814 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:38.837068 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:41.336826 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:43.336942 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:45.836809 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:47.836978 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:06:49.169418 1532633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:06:49.229366 1532633 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:06:49.229477 1532633 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 16:06:49.233284 1532633 out.go:179] * Enabled addons: 
	I1213 16:06:49.236115 1532633 addons.go:530] duration metric: took 1m59.83066349s for enable addons: enabled=[]
	W1213 16:06:50.336853 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:52.836975 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:55.336982 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:57.836819 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:06:59.837077 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:02.336884 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:04.337044 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:06.836907 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:09.336829 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:11.836902 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:13.836966 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:16.336991 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:18.836964 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:21.336861 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:23.336994 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:25.337136 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:27.837080 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:30.336834 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:32.336947 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:34.337009 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:36.836927 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:39.336872 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:41.836269 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:43.836773 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:45.837030 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:47.837167 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:50.336908 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:52.336995 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:54.836850 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:56.837113 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:07:59.336907 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:01.836519 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:03.836935 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:05.837188 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:08.336182 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:10.336290 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:12.836188 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:14.837007 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:17.336926 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:19.337137 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:21.836823 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:23.836887 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:26.336902 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:28.836875 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:30.837155 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:33.344927 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:35.836197 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:38.336221 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:40.336266 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:42.336937 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:44.837052 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:47.336949 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:49.337721 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:51.836216 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:54.336802 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:56.337015 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:08:58.337101 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:00.340034 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:02.837190 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:05.337007 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:07.836179 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:09.836379 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:12.336811 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:14.337024 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:16.836875 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:18.836958 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:21.336809 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:23.337044 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:25.337144 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:27.837183 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:30.336838 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:32.336966 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:34.836253 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:36.837105 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:39.336929 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:41.836911 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:44.336936 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:46.336992 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:48.837015 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:51.336072 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:53.336374 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:55.836834 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:57.837117 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:09:59.837157 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:02.336184 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:04.336871 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:06.336975 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:08.836835 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:10.836923 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:12.837238 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:15.336203 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:17.337025 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:19.837094 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:22.336928 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:24.836175 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:26.836947 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:10:29.934181 1527131 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000175102s
	I1213 16:10:29.934219 1527131 kubeadm.go:319] 
	I1213 16:10:29.934278 1527131 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 16:10:29.934315 1527131 kubeadm.go:319] 	- The kubelet is not running
	I1213 16:10:29.934420 1527131 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 16:10:29.934431 1527131 kubeadm.go:319] 
	I1213 16:10:29.934571 1527131 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 16:10:29.934616 1527131 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 16:10:29.934646 1527131 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 16:10:29.934653 1527131 kubeadm.go:319] 
	I1213 16:10:29.939000 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 16:10:29.939475 1527131 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 16:10:29.939605 1527131 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 16:10:29.939919 1527131 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 16:10:29.939941 1527131 kubeadm.go:319] 
	I1213 16:10:29.940021 1527131 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 16:10:29.940103 1527131 kubeadm.go:403] duration metric: took 8m6.466581637s to StartCluster
	I1213 16:10:29.940140 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:10:29.940207 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:10:29.965453 1527131 cri.go:89] found id: ""
	I1213 16:10:29.965477 1527131 logs.go:282] 0 containers: []
	W1213 16:10:29.965487 1527131 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:10:29.965493 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:10:29.965556 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:10:29.991522 1527131 cri.go:89] found id: ""
	I1213 16:10:29.991547 1527131 logs.go:282] 0 containers: []
	W1213 16:10:29.991560 1527131 logs.go:284] No container was found matching "etcd"
	I1213 16:10:29.991566 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:10:29.991628 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:10:30.032969 1527131 cri.go:89] found id: ""
	I1213 16:10:30.032993 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.033002 1527131 logs.go:284] No container was found matching "coredns"
	I1213 16:10:30.033008 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:10:30.033087 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:10:30.086903 1527131 cri.go:89] found id: ""
	I1213 16:10:30.086929 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.086937 1527131 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:10:30.086944 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:10:30.087018 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:10:30.120054 1527131 cri.go:89] found id: ""
	I1213 16:10:30.120085 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.120097 1527131 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:10:30.120106 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:10:30.120179 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:10:30.147481 1527131 cri.go:89] found id: ""
	I1213 16:10:30.147512 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.147521 1527131 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:10:30.147528 1527131 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:10:30.147597 1527131 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:10:30.175161 1527131 cri.go:89] found id: ""
	I1213 16:10:30.175192 1527131 logs.go:282] 0 containers: []
	W1213 16:10:30.175202 1527131 logs.go:284] No container was found matching "kindnet"
	I1213 16:10:30.175212 1527131 logs.go:123] Gathering logs for kubelet ...
	I1213 16:10:30.175227 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:10:30.236323 1527131 logs.go:123] Gathering logs for dmesg ...
	I1213 16:10:30.236366 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:10:30.252852 1527131 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:10:30.252882 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:10:30.323930 1527131 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:10:30.308479    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.309175    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.310915    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318127    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318780    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:10:30.308479    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.309175    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.310915    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318127    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:10:30.318780    4881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:10:30.323954 1527131 logs.go:123] Gathering logs for containerd ...
	I1213 16:10:30.323966 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:10:30.363277 1527131 logs.go:123] Gathering logs for container status ...
	I1213 16:10:30.363323 1527131 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 16:10:30.390658 1527131 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000175102s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 16:10:30.390707 1527131 out.go:285] * 
	W1213 16:10:30.390758 1527131 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000175102s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:10:30.390773 1527131 out.go:285] * 
	W1213 16:10:30.392934 1527131 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 16:10:30.397735 1527131 out.go:203] 
	W1213 16:10:30.401437 1527131 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000175102s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 16:10:30.401483 1527131 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 16:10:30.401510 1527131 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 16:10:30.404721 1527131 out.go:203] 
	W1213 16:10:28.837006 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:31.336932 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:33.836168 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:35.836828 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:37.837072 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:39.837209 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:42.337374 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:44.836865 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:47.336935 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 16:10:49.836264 1532633 node_ready.go:55] error getting node "no-preload-439544" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-439544": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 16:10:50.335908 1532633 node_ready.go:38] duration metric: took 6m0.000276074s for node "no-preload-439544" to be "Ready" ...
	I1213 16:10:50.339158 1532633 out.go:203] 
	W1213 16:10:50.342306 1532633 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 16:10:50.342341 1532633 out.go:285] * 
	W1213 16:10:50.344947 1532633 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 16:10:50.347878 1532633 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.763599012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.763668968Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.763768436Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.763840582Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.763912326Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.763982060Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.764040315Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.764106184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.764177723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.764268773Z" level=info msg="Connect containerd service"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.764658655Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.765332239Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.781836915Z" level=info msg="Start subscribing containerd event"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.782053583Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.782112346Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.782060179Z" level=info msg="Start recovering state"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821027191Z" level=info msg="Start event monitor"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821077496Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821086587Z" level=info msg="Start streaming server"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821097803Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821106861Z" level=info msg="runtime interface starting up..."
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821113171Z" level=info msg="starting plugins..."
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.821124559Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 16:02:21 newest-cni-526531 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 16:02:21 newest-cni-526531 containerd[761]: time="2025-12-13T16:02:21.823117415Z" level=info msg="containerd successfully booted in 0.082954s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:12:11.685593    6004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:12:11.686358    6004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:12:11.688006    6004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:12:11.688681    6004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:12:11.690318    6004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 16:12:11 up  7:54,  0 user,  load average: 1.13, 0.79, 1.23
	Linux newest-cni-526531 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 16:12:08 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:12:08 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 451.
	Dec 13 16:12:08 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:12:08 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:12:08 newest-cni-526531 kubelet[5881]: E1213 16:12:08.896032    5881 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:12:08 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:12:08 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:12:09 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 452.
	Dec 13 16:12:09 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:12:09 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:12:09 newest-cni-526531 kubelet[5887]: E1213 16:12:09.640784    5887 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:12:09 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:12:09 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:12:10 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 453.
	Dec 13 16:12:10 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:12:10 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:12:10 newest-cni-526531 kubelet[5893]: E1213 16:12:10.409388    5893 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:12:10 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:12:10 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:12:11 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 454.
	Dec 13 16:12:11 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:12:11 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:12:11 newest-cni-526531 kubelet[5918]: E1213 16:12:11.180151    5918 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:12:11 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:12:11 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531: exit status 6 (361.598791ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 16:12:12.311982 1542057 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-526531" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-526531" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (100.16s)
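
The kubelet journal above shows why this start never recovers: kubelet v1.35.0-beta.0 exits during configuration validation because the host is still running cgroup v1, which matches the kubeadm '[WARNING SystemVerification]' about the 'FailCgroupV1' option and minikube's K8S_KUBELET_NOT_RUNNING / GUEST_START exits. The following is only a minimal triage sketch for this failure mode, not part of the captured log: the cgroup check uses standard coreutils/systemd tooling, the retry flag is the one minikube itself suggests above, and the YAML spelling 'failCgroupV1' for the kubelet configuration option is an assumption inferred from the warning text.

	# Confirm which cgroup hierarchy the host exposes ('cgroup2fs' = cgroup v2, 'tmpfs' = legacy cgroup v1):
	stat -fc %T /sys/fs/cgroup/

	# Inspect the crash-looping unit, as the kubeadm output advises:
	systemctl status kubelet
	journalctl -xeu kubelet -n 50

	# Retry the start with the flag minikube suggests for cgroup-driver problems
	# (other flags assumed to match the original invocation for this profile):
	out/minikube-linux-arm64 start -p newest-cni-526531 --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Per the kubeadm warning, keeping kubelet v1.35+ on a cgroup v1 host would additionally require
	# setting the kubelet configuration option 'FailCgroupV1' to 'false' (assumed YAML form:
	# failCgroupV1: false) and explicitly skipping the SystemVerification check; see KEP 5573.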

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 16:11:06.013944 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 16:13:23.671989 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 16:13:42.552446 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
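Each of these warnings is one retry of the same request visible in the URL: a pod list in the kubernetes-dashboard namespace with the label selector k8s-app=kubernetes-dashboard against the apiserver at 192.168.85.2:8443, which keeps refusing connections. A minimal client-go sketch of that request follows; it is not the test's actual helper, and the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the test uses the profile's own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same call the warning shows: list pods in the kubernetes-dashboard
	// namespace filtered by the k8s-app=kubernetes-dashboard label.
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// While the apiserver is unreachable this fails with the same
		// "connect: connection refused" error seen in the log above.
		log.Fatal(err)
	}
	fmt.Printf("found %d dashboard pods\n", len(pods.Items))
}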
E1213 16:14:53.531962 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 23 more times)
E1213 16:15:18.171566 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 20 more times)
E1213 16:15:38.304309 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(last message repeated 37 more times)
E1213 16:16:16.596989 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 16:18:25.638339 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 16:18:42.552859 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1213 16:19:28.297684 1252934 config.go:182] Loaded profile config "auto-023791": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(the identical connection-refused warning was logged 17 more times while polling continued)
E1213 16:19:46.742778 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
(the identical connection-refused warning was logged 4 more times before the client rate limiter gave up)
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544: exit status 2 (395.087771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-439544" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
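For local debugging, the check the helper performs can be reproduced by hand once the profile's apiserver is reachable again. The namespace and label selector below come from the warnings above; the context name and the 9m timeout mirror the profile name and the test's wait window (a sketch, not part of the test suite):

    # hypothetical manual reproduction of the dashboard-pod poll
    kubectl config use-context no-preload-439544
    kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard -o wide
    # equivalent of the test's 9m0s wait for the pod to become Ready
    kubectl wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard --timeout=9m

With the apiserver stopped, as in this run, both commands would fail with the same connection-refused error seen in the poll warnings.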
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-439544
helpers_test.go:244: (dbg) docker inspect no-preload-439544:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	        "Created": "2025-12-13T15:54:12.178460013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1532771,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T16:04:42.677982497Z",
	            "FinishedAt": "2025-12-13T16:04:41.261584549Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hostname",
	        "HostsPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hosts",
	        "LogPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d-json.log",
	        "Name": "/no-preload-439544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-439544:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-439544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	                "LowerDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-439544",
	                "Source": "/var/lib/docker/volumes/no-preload-439544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-439544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-439544",
	                "name.minikube.sigs.k8s.io": "no-preload-439544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4dced35fb175add3b26a40dff982545ee75f124f4735db30543f89845b336b1c",
	            "SandboxKey": "/var/run/docker/netns/4dced35fb175",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34228"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34229"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34232"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34230"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34231"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-439544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:74:3b:fa:0b:79",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77a202cfc0163f00e436e53caf7341626192055eeb5da6a6f5d953ced7f7adfb",
	                    "EndpointID": "7084aedd50f3a2db715b196cf320f0078e1627ae582576065d327fcc3de1e2ca",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-439544",
	                        "53d57adb0653"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
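The inspect output above shows the container is still running and that 8443/tcp is published on a dynamic host port (34231 in this run). If you only want that one mapping, the same Go-template form minikube uses later in this log for 22/tcp can be pointed at 8443/tcp (a hedged example, not a command the test itself runs):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-439544
    # prints 34231 here; if the apiserver were up it would be reachable via https://127.0.0.1:<that port>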
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544: exit status 2 (427.50607ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
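Taken together, the two status probes show the typical post-stop failure shape: the Docker host container reports Running while the apiserver inside it reports Stopped. Both fields can be read in one call with the same binary and profile (a sketch using minikube's status template fields):

    out/minikube-linux-arm64 status -p no-preload-439544 --format='host:{{.Host}} apiserver:{{.APIServer}} kubelet:{{.Kubelet}}'
    # expected for this run: host:Running apiserver:Stopped ...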
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-439544 logs -n 25
E1213 16:19:53.531917 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                              │   PROFILE   │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p auto-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:18 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 pgrep -a kubelet                                                                                               │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo cat /etc/nsswitch.conf                                                                                    │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo cat /etc/hosts                                                                                            │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo cat /etc/resolv.conf                                                                                      │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo crictl pods                                                                                               │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo crictl ps --all                                                                                           │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                    │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo ip a s                                                                                                    │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo ip r s                                                                                                    │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo iptables-save                                                                                             │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo iptables -t nat -L -n -v                                                                                  │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo systemctl status kubelet --all --full --no-pager                                                          │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo systemctl cat kubelet --no-pager                                                                          │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo journalctl -xeu kubelet --all --full --no-pager                                                           │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo cat /etc/kubernetes/kubelet.conf                                                                          │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo cat /var/lib/kubelet/config.yaml                                                                          │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo systemctl status docker --all --full --no-pager                                                           │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │                     │
	│ ssh     │ -p auto-023791 sudo systemctl cat docker --no-pager                                                                           │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo cat /etc/docker/daemon.json                                                                               │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │                     │
	│ ssh     │ -p auto-023791 sudo docker system info                                                                                        │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │                     │
	│ ssh     │ -p auto-023791 sudo systemctl status cri-docker --all --full --no-pager                                                       │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │                     │
	│ ssh     │ -p auto-023791 sudo systemctl cat cri-docker --no-pager                                                                       │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	│ ssh     │ -p auto-023791 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                  │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │                     │
	│ ssh     │ -p auto-023791 sudo cat /usr/lib/systemd/system/cri-docker.service                                                            │ auto-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:19 UTC │ 13 Dec 25 16:19 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 16:18:39
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 16:18:39.486684 1560718 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:18:39.486804 1560718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:18:39.486815 1560718 out.go:374] Setting ErrFile to fd 2...
	I1213 16:18:39.486821 1560718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:18:39.487102 1560718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:18:39.487611 1560718 out.go:368] Setting JSON to false
	I1213 16:18:39.488495 1560718 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28868,"bootTime":1765613851,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:18:39.488566 1560718 start.go:143] virtualization:  
	I1213 16:18:39.492347 1560718 out.go:179] * [auto-023791] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:18:39.496425 1560718 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:18:39.496547 1560718 notify.go:221] Checking for updates...
	I1213 16:18:39.502465 1560718 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:18:39.505529 1560718 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:18:39.508580 1560718 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:18:39.511686 1560718 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:18:39.514873 1560718 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:18:39.518555 1560718 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:18:39.518716 1560718 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:18:39.544803 1560718 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:18:39.544950 1560718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:18:39.610805 1560718 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:18:39.601505873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:18:39.610914 1560718 docker.go:319] overlay module found
	I1213 16:18:39.614053 1560718 out.go:179] * Using the docker driver based on user configuration
	I1213 16:18:39.616917 1560718 start.go:309] selected driver: docker
	I1213 16:18:39.616933 1560718 start.go:927] validating driver "docker" against <nil>
	I1213 16:18:39.616947 1560718 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:18:39.617636 1560718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:18:39.691432 1560718 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:18:39.682498343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:18:39.691587 1560718 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 16:18:39.691814 1560718 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 16:18:39.694863 1560718 out.go:179] * Using Docker driver with root privileges
	I1213 16:18:39.697996 1560718 cni.go:84] Creating CNI manager for ""
	I1213 16:18:39.698079 1560718 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:18:39.698093 1560718 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 16:18:39.698196 1560718 start.go:353] cluster config:
	{Name:auto-023791 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-023791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:con
tainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:18:39.701396 1560718 out.go:179] * Starting "auto-023791" primary control-plane node in "auto-023791" cluster
	I1213 16:18:39.704386 1560718 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:18:39.707366 1560718 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:18:39.710342 1560718 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 16:18:39.710401 1560718 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1213 16:18:39.710411 1560718 cache.go:65] Caching tarball of preloaded images
	I1213 16:18:39.710502 1560718 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 16:18:39.710512 1560718 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1213 16:18:39.710561 1560718 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:18:39.710634 1560718 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/config.json ...
	I1213 16:18:39.710651 1560718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/config.json: {Name:mk2b9217eae34f10a42357c3f7a9fb10e07b52a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:18:39.731394 1560718 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:18:39.731418 1560718 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:18:39.731438 1560718 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:18:39.731470 1560718 start.go:360] acquireMachinesLock for auto-023791: {Name:mk832fb8adc52b789671dd66f9d6dfb93b6f07a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:18:39.731599 1560718 start.go:364] duration metric: took 107.993µs to acquireMachinesLock for "auto-023791"
	I1213 16:18:39.731629 1560718 start.go:93] Provisioning new machine with config: &{Name:auto-023791 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-023791 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:18:39.731704 1560718 start.go:125] createHost starting for "" (driver="docker")
	I1213 16:18:39.735179 1560718 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 16:18:39.735486 1560718 start.go:159] libmachine.API.Create for "auto-023791" (driver="docker")
	I1213 16:18:39.735532 1560718 client.go:173] LocalClient.Create starting
	I1213 16:18:39.735622 1560718 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem
	I1213 16:18:39.735665 1560718 main.go:143] libmachine: Decoding PEM data...
	I1213 16:18:39.735685 1560718 main.go:143] libmachine: Parsing certificate...
	I1213 16:18:39.735735 1560718 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem
	I1213 16:18:39.735760 1560718 main.go:143] libmachine: Decoding PEM data...
	I1213 16:18:39.735774 1560718 main.go:143] libmachine: Parsing certificate...
	I1213 16:18:39.736140 1560718 cli_runner.go:164] Run: docker network inspect auto-023791 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 16:18:39.752729 1560718 cli_runner.go:211] docker network inspect auto-023791 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 16:18:39.752823 1560718 network_create.go:284] running [docker network inspect auto-023791] to gather additional debugging logs...
	I1213 16:18:39.752844 1560718 cli_runner.go:164] Run: docker network inspect auto-023791
	W1213 16:18:39.768954 1560718 cli_runner.go:211] docker network inspect auto-023791 returned with exit code 1
	I1213 16:18:39.768981 1560718 network_create.go:287] error running [docker network inspect auto-023791]: docker network inspect auto-023791: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-023791 not found
	I1213 16:18:39.768996 1560718 network_create.go:289] output of [docker network inspect auto-023791]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-023791 not found
	
	** /stderr **
	I1213 16:18:39.769103 1560718 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:18:39.785571 1560718 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2ccf9a9eb2c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:9e:64:ed:f4:f2} reservation:<nil>}
	I1213 16:18:39.785849 1560718 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2add06a95dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:3f:19:f0:2f:b1} reservation:<nil>}
	I1213 16:18:39.786111 1560718 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8517ffe6861d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:d6:d4:51:8b:3d} reservation:<nil>}
	I1213 16:18:39.786547 1560718 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c3630}
	I1213 16:18:39.786573 1560718 network_create.go:124] attempt to create docker network auto-023791 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 16:18:39.786637 1560718 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-023791 auto-023791
	I1213 16:18:39.845730 1560718 network_create.go:108] docker network auto-023791 192.168.76.0/24 created
	I1213 16:18:39.845768 1560718 kic.go:121] calculated static IP "192.168.76.2" for the "auto-023791" container
	I1213 16:18:39.845842 1560718 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 16:18:39.861800 1560718 cli_runner.go:164] Run: docker volume create auto-023791 --label name.minikube.sigs.k8s.io=auto-023791 --label created_by.minikube.sigs.k8s.io=true
	I1213 16:18:39.879988 1560718 oci.go:103] Successfully created a docker volume auto-023791
	I1213 16:18:39.880075 1560718 cli_runner.go:164] Run: docker run --rm --name auto-023791-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-023791 --entrypoint /usr/bin/test -v auto-023791:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 16:18:40.406174 1560718 oci.go:107] Successfully prepared a docker volume auto-023791
	I1213 16:18:40.406246 1560718 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 16:18:40.406259 1560718 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 16:18:40.406342 1560718 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-023791:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 16:18:44.433783 1560718 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-023791:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.027399544s)
	I1213 16:18:44.433819 1560718 kic.go:203] duration metric: took 4.027556316s to extract preloaded images to volume ...
	W1213 16:18:44.433956 1560718 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 16:18:44.434068 1560718 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 16:18:44.487569 1560718 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-023791 --name auto-023791 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-023791 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-023791 --network auto-023791 --ip 192.168.76.2 --volume auto-023791:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 16:18:44.791860 1560718 cli_runner.go:164] Run: docker container inspect auto-023791 --format={{.State.Running}}
	I1213 16:18:44.813410 1560718 cli_runner.go:164] Run: docker container inspect auto-023791 --format={{.State.Status}}
	I1213 16:18:44.837955 1560718 cli_runner.go:164] Run: docker exec auto-023791 stat /var/lib/dpkg/alternatives/iptables
	I1213 16:18:44.900038 1560718 oci.go:144] the created container "auto-023791" has a running status.
	I1213 16:18:44.900070 1560718 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/auto-023791/id_rsa...
	I1213 16:18:45.223282 1560718 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/auto-023791/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 16:18:45.266099 1560718 cli_runner.go:164] Run: docker container inspect auto-023791 --format={{.State.Status}}
	I1213 16:18:45.323628 1560718 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 16:18:45.323649 1560718 kic_runner.go:114] Args: [docker exec --privileged auto-023791 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 16:18:45.396668 1560718 cli_runner.go:164] Run: docker container inspect auto-023791 --format={{.State.Status}}
	I1213 16:18:45.424983 1560718 machine.go:94] provisionDockerMachine start ...
	I1213 16:18:45.425077 1560718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-023791
	I1213 16:18:45.447914 1560718 main.go:143] libmachine: Using SSH client type: native
	I1213 16:18:45.448244 1560718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34238 <nil> <nil>}
	I1213 16:18:45.448254 1560718 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:18:45.448828 1560718 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35484->127.0.0.1:34238: read: connection reset by peer
	I1213 16:18:48.598873 1560718 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-023791
	
	I1213 16:18:48.598900 1560718 ubuntu.go:182] provisioning hostname "auto-023791"
	I1213 16:18:48.598971 1560718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-023791
	I1213 16:18:48.616749 1560718 main.go:143] libmachine: Using SSH client type: native
	I1213 16:18:48.617084 1560718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34238 <nil> <nil>}
	I1213 16:18:48.617103 1560718 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-023791 && echo "auto-023791" | sudo tee /etc/hostname
	I1213 16:18:48.776929 1560718 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-023791
	
	I1213 16:18:48.777057 1560718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-023791
	I1213 16:18:48.795046 1560718 main.go:143] libmachine: Using SSH client type: native
	I1213 16:18:48.795409 1560718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34238 <nil> <nil>}
	I1213 16:18:48.795433 1560718 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-023791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-023791/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-023791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:18:48.955593 1560718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:18:48.955665 1560718 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:18:48.955709 1560718 ubuntu.go:190] setting up certificates
	I1213 16:18:48.955758 1560718 provision.go:84] configureAuth start
	I1213 16:18:48.955873 1560718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-023791
	I1213 16:18:48.972558 1560718 provision.go:143] copyHostCerts
	I1213 16:18:48.972639 1560718 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:18:48.972655 1560718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:18:48.972732 1560718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:18:48.972834 1560718 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:18:48.972846 1560718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:18:48.972874 1560718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:18:48.972949 1560718 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:18:48.972959 1560718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:18:48.972985 1560718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:18:48.973043 1560718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.auto-023791 san=[127.0.0.1 192.168.76.2 auto-023791 localhost minikube]
	I1213 16:18:49.156535 1560718 provision.go:177] copyRemoteCerts
	I1213 16:18:49.156608 1560718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:18:49.156654 1560718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-023791
	I1213 16:18:49.173215 1560718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/auto-023791/id_rsa Username:docker}
	I1213 16:18:49.278996 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1213 16:18:49.296481 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 16:18:49.313937 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:18:49.331939 1560718 provision.go:87] duration metric: took 376.148893ms to configureAuth
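configureAuth above copies the host CA, client cert and key into the .minikube root, mints a server certificate whose SANs cover 127.0.0.1, 192.168.76.2, auto-023791, localhost and minikube, and scp's the server pair plus ca.pem into /etc/docker on the node. If the SANs ever need checking, a hedged one-liner against the path used in this run:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'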
	I1213 16:18:49.332011 1560718 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:18:49.332221 1560718 config.go:182] Loaded profile config "auto-023791": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 16:18:49.332235 1560718 machine.go:97] duration metric: took 3.907234606s to provisionDockerMachine
	I1213 16:18:49.332244 1560718 client.go:176] duration metric: took 9.596706412s to LocalClient.Create
	I1213 16:18:49.332264 1560718 start.go:167] duration metric: took 9.596779838s to libmachine.API.Create "auto-023791"
	I1213 16:18:49.332271 1560718 start.go:293] postStartSetup for "auto-023791" (driver="docker")
	I1213 16:18:49.332281 1560718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:18:49.332333 1560718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:18:49.332373 1560718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-023791
	I1213 16:18:49.354454 1560718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/auto-023791/id_rsa Username:docker}
	I1213 16:18:49.459217 1560718 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:18:49.462454 1560718 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:18:49.462484 1560718 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:18:49.462496 1560718 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:18:49.462553 1560718 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:18:49.462633 1560718 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:18:49.462737 1560718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:18:49.470054 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:18:49.487378 1560718 start.go:296] duration metric: took 155.022455ms for postStartSetup
	I1213 16:18:49.487782 1560718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-023791
	I1213 16:18:49.505748 1560718 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/config.json ...
	I1213 16:18:49.506046 1560718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:18:49.506096 1560718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-023791
	I1213 16:18:49.524702 1560718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/auto-023791/id_rsa Username:docker}
	I1213 16:18:49.633978 1560718 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:18:49.639076 1560718 start.go:128] duration metric: took 9.907355927s to createHost
	I1213 16:18:49.639100 1560718 start.go:83] releasing machines lock for "auto-023791", held for 9.907488412s
	I1213 16:18:49.639174 1560718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-023791
	I1213 16:18:49.656427 1560718 ssh_runner.go:195] Run: cat /version.json
	I1213 16:18:49.656479 1560718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-023791
	I1213 16:18:49.656486 1560718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:18:49.656548 1560718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-023791
	I1213 16:18:49.677584 1560718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/auto-023791/id_rsa Username:docker}
	I1213 16:18:49.686241 1560718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/auto-023791/id_rsa Username:docker}
	I1213 16:18:49.783007 1560718 ssh_runner.go:195] Run: systemctl --version
	I1213 16:18:49.874529 1560718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:18:49.879040 1560718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:18:49.879120 1560718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:18:49.905785 1560718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 16:18:49.905808 1560718 start.go:496] detecting cgroup driver to use...
	I1213 16:18:49.905842 1560718 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:18:49.905894 1560718 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:18:49.921220 1560718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:18:49.934907 1560718 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:18:49.934973 1560718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:18:49.955027 1560718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:18:49.979347 1560718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:18:50.106698 1560718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:18:50.228043 1560718 docker.go:234] disabling docker service ...
	I1213 16:18:50.228111 1560718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:18:50.249577 1560718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:18:50.262760 1560718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:18:50.377589 1560718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:18:50.499576 1560718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:18:50.512777 1560718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:18:50.527593 1560718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:18:50.536423 1560718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:18:50.545743 1560718 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:18:50.545876 1560718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:18:50.554797 1560718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:18:50.563253 1560718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:18:50.572081 1560718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:18:50.581495 1560718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:18:50.589462 1560718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:18:50.598230 1560718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:18:50.607184 1560718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:18:50.616231 1560718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:18:50.624043 1560718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:18:50.631806 1560718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:18:50.746595 1560718 ssh_runner.go:195] Run: sudo systemctl restart containerd
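Two things happen in the block above before containerd is restarted. First, the bash -c at 16:18:50.512777 writes a one-line /etc/crictl.yaml so that crictl talks straight to containerd's socket instead of probing its default endpoint list. Second, the run of sed edits rewrites /etc/containerd/config.toml in place: sandbox_image is pinned to registry.k8s.io/pause:3.10.1, restrict_oom_score_adj is set to false, SystemdCgroup is forced to false to match the "cgroupfs" driver detected on the host, the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are mapped to io.containerd.runc.v2, conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports is re-added as true under the CRI plugin. A hedged spot-check of the results after the restart:

	sudo cat /etc/crictl.yaml    # expected: runtime-endpoint: unix:///run/containerd/containerd.sock
	sudo grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
	sudo crictl info >/dev/null && echo "crictl reaches containerd"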
	I1213 16:18:50.882774 1560718 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:18:50.882875 1560718 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:18:50.886792 1560718 start.go:564] Will wait 60s for crictl version
	I1213 16:18:50.886888 1560718 ssh_runner.go:195] Run: which crictl
	I1213 16:18:50.890331 1560718 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:18:50.913610 1560718 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 16:18:50.913710 1560718 ssh_runner.go:195] Run: containerd --version
	I1213 16:18:50.934214 1560718 ssh_runner.go:195] Run: containerd --version
	I1213 16:18:50.958804 1560718 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1213 16:18:50.961799 1560718 cli_runner.go:164] Run: docker network inspect auto-023791 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:18:50.976942 1560718 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 16:18:50.980815 1560718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
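The host.minikube.internal entry is added with a grep -v / echo / cp dance rather than sed -i, most likely because /etc/hosts inside the container is a Docker-managed bind mount: tools that replace the file via rename fail there, while cp rewrites the existing inode. The result can be confirmed with:

	grep 'host.minikube.internal' /etc/hosts   # expected: 192.168.76.1	host.minikube.internal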
	I1213 16:18:50.990277 1560718 kubeadm.go:884] updating cluster {Name:auto-023791 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-023791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:18:50.990401 1560718 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 16:18:50.990471 1560718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:18:51.017054 1560718 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:18:51.017079 1560718 containerd.go:534] Images already preloaded, skipping extraction
	I1213 16:18:51.017145 1560718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:18:51.041382 1560718 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:18:51.041403 1560718 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:18:51.041411 1560718 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1213 16:18:51.041520 1560718 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-023791 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-023791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
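The kubelet unit fragment above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 315-byte scp): the empty ExecStart= clears the stock command line, and the second ExecStart= pins the kubelet to the v1.34.2 binary under /var/lib/minikube/binaries with the bootstrap kubeconfig, hostname override and node IP. After the daemon-reload at 16:18:51 it can be inspected with:

	systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in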
	I1213 16:18:51.041588 1560718 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:18:51.069875 1560718 cni.go:84] Creating CNI manager for ""
	I1213 16:18:51.069901 1560718 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:18:51.069924 1560718 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 16:18:51.069948 1560718 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-023791 NodeName:auto-023791 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:18:51.070101 1560718 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-023791"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 16:18:51.070178 1560718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 16:18:51.078294 1560718 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:18:51.078367 1560718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:18:51.087343 1560718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1213 16:18:51.102951 1560718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 16:18:51.118326 1560718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
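The kubeadm config printed above and just copied to /var/tmp/minikube/kubeadm.yaml.new bundles four objects in a single file: an InitConfiguration (advertise address 192.168.76.2:8443, containerd CRI socket, node name and kubelet extra args), a ClusterConfiguration (control-plane endpoint control-plane.minikube.internal:8443, cert dir, pod subnet 10.244.0.0/16, service subnet 10.96.0.0/12), a KubeletConfiguration (cgroupfs driver, eviction thresholds effectively disabled, failSwapOn: false) and a KubeProxyConfiguration. Once renamed to kubeadm.yaml on the node it can be exercised without side effects, assuming a kubeadm that supports dry runs (v1.34 does):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run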
	I1213 16:18:51.132378 1560718 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:18:51.136643 1560718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:18:51.147223 1560718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:18:51.262306 1560718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:18:51.279496 1560718 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791 for IP: 192.168.76.2
	I1213 16:18:51.279576 1560718 certs.go:195] generating shared ca certs ...
	I1213 16:18:51.279608 1560718 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:18:51.279789 1560718 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:18:51.279869 1560718 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:18:51.279903 1560718 certs.go:257] generating profile certs ...
	I1213 16:18:51.279982 1560718 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/client.key
	I1213 16:18:51.280019 1560718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/client.crt with IP's: []
	I1213 16:18:51.794308 1560718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/client.crt ...
	I1213 16:18:51.794341 1560718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/client.crt: {Name:mk2bbae41300749b5f275abae39b422c1d9e28e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:18:51.794564 1560718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/client.key ...
	I1213 16:18:51.794578 1560718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/client.key: {Name:mk08337f1cf2c3537ee0877c08196952a4058ed5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:18:51.794703 1560718 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/apiserver.key.af60f45a
	I1213 16:18:51.794720 1560718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/apiserver.crt.af60f45a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 16:18:52.101047 1560718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/apiserver.crt.af60f45a ...
	I1213 16:18:52.101083 1560718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/apiserver.crt.af60f45a: {Name:mk53f72648349cb88d0cafda851e17b8a7e7e601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:18:52.101299 1560718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/apiserver.key.af60f45a ...
	I1213 16:18:52.101318 1560718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/apiserver.key.af60f45a: {Name:mk47ab894d51d1f5989a15870c46c279639566b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:18:52.101417 1560718 certs.go:382] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/apiserver.crt.af60f45a -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/apiserver.crt
	I1213 16:18:52.101497 1560718 certs.go:386] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/apiserver.key.af60f45a -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/apiserver.key
	I1213 16:18:52.101561 1560718 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/proxy-client.key
	I1213 16:18:52.101579 1560718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/proxy-client.crt with IP's: []
	I1213 16:18:52.226730 1560718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/proxy-client.crt ...
	I1213 16:18:52.226763 1560718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/proxy-client.crt: {Name:mk51e3c5712e61ca1dfe10e13591d05a214b0ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:18:52.226957 1560718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/proxy-client.key ...
	I1213 16:18:52.226970 1560718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/proxy-client.key: {Name:mkb54681c68c642b2db296ff0bc1f2cfdf655547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
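The apiserver certificate generated above is signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2; 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR, i.e. the ClusterIP of the in-cluster kubernetes service, which is why it must appear in the SANs. With a reasonably recent openssl the SAN list can be read back directly (a hedged check, using the profile path from this run):

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/apiserver.crt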
	I1213 16:18:52.227175 1560718 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:18:52.227222 1560718 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:18:52.227235 1560718 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:18:52.227262 1560718 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:18:52.227290 1560718 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:18:52.227331 1560718 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:18:52.227382 1560718 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:18:52.227978 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:18:52.246399 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:18:52.264884 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:18:52.282925 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:18:52.301476 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1213 16:18:52.319693 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 16:18:52.338069 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:18:52.356398 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 16:18:52.374015 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:18:52.394294 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:18:52.412046 1560718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:18:52.429552 1560718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:18:52.442343 1560718 ssh_runner.go:195] Run: openssl version
	I1213 16:18:52.448589 1560718 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:18:52.456337 1560718 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:18:52.463703 1560718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:18:52.467359 1560718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:18:52.467426 1560718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:18:52.508679 1560718 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:18:52.516348 1560718 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1252934.pem /etc/ssl/certs/51391683.0
	I1213 16:18:52.524033 1560718 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:18:52.531566 1560718 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:18:52.540097 1560718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:18:52.544047 1560718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:18:52.544141 1560718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:18:52.585582 1560718 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:18:52.593302 1560718 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12529342.pem /etc/ssl/certs/3ec20f2e.0
	I1213 16:18:52.600764 1560718 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:18:52.608493 1560718 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:18:52.615891 1560718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:18:52.620201 1560718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:18:52.620293 1560718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:18:52.661844 1560718 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:18:52.669355 1560718 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
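The test -L / ln -fs pairs above install each CA under /etc/ssl/certs twice: once under a readable name and once under its OpenSSL subject-hash name with a .0 suffix, which is the layout OpenSSL uses to look up trust anchors in a directory. The hash in each symlink name is exactly what the preceding openssl x509 -hash call printed, e.g. for minikubeCA.pem:

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"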
	I1213 16:18:52.676859 1560718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:18:52.680720 1560718 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 16:18:52.680820 1560718 kubeadm.go:401] StartCluster: {Name:auto-023791 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-023791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:18:52.680933 1560718 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:18:52.681024 1560718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:18:52.716264 1560718 cri.go:89] found id: ""
	I1213 16:18:52.716382 1560718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:18:52.724089 1560718 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 16:18:52.731957 1560718 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 16:18:52.732084 1560718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 16:18:52.740105 1560718 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 16:18:52.740124 1560718 kubeadm.go:158] found existing configuration files:
	
	I1213 16:18:52.740204 1560718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 16:18:52.747812 1560718 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 16:18:52.747927 1560718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 16:18:52.755120 1560718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 16:18:52.762968 1560718 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 16:18:52.763055 1560718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 16:18:52.770388 1560718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 16:18:52.778160 1560718 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 16:18:52.778229 1560718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 16:18:52.785769 1560718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 16:18:52.793541 1560718 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 16:18:52.793655 1560718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 16:18:52.801400 1560718 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 16:18:52.847034 1560718 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 16:18:52.847142 1560718 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:18:52.876783 1560718 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:18:52.876899 1560718 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:18:52.876973 1560718 kubeadm.go:319] OS: Linux
	I1213 16:18:52.877044 1560718 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:18:52.877108 1560718 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:18:52.877179 1560718 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:18:52.877252 1560718 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:18:52.877326 1560718 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:18:52.877396 1560718 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:18:52.877457 1560718 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:18:52.877529 1560718 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:18:52.877600 1560718 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:18:52.954922 1560718 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:18:52.955075 1560718 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:18:52.955175 1560718 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:18:52.960573 1560718 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:18:52.967286 1560718 out.go:252]   - Generating certificates and keys ...
	I1213 16:18:52.967491 1560718 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:18:52.967570 1560718 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:18:53.265540 1560718 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 16:18:54.049710 1560718 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 16:18:54.540578 1560718 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 16:18:54.874128 1560718 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 16:18:54.953830 1560718 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 16:18:54.954196 1560718 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-023791 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:18:55.946188 1560718 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 16:18:55.946527 1560718 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-023791 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:18:56.269133 1560718 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 16:18:56.895878 1560718 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 16:18:57.177395 1560718 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 16:18:57.177643 1560718 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:18:57.537809 1560718 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:18:58.676086 1560718 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:18:58.945074 1560718 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:18:59.212770 1560718 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:18:59.431900 1560718 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:18:59.432665 1560718 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:18:59.435585 1560718 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:18:59.439346 1560718 out.go:252]   - Booting up control plane ...
	I1213 16:18:59.439446 1560718 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:18:59.439526 1560718 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:18:59.440728 1560718 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:18:59.457769 1560718 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:18:59.458296 1560718 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:18:59.466522 1560718 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:18:59.466911 1560718 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:18:59.467178 1560718 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:18:59.616006 1560718 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:18:59.616128 1560718 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 16:19:00.151889 1560718 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 507.610969ms
	I1213 16:19:00.152005 1560718 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 16:19:00.152089 1560718 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1213 16:19:00.152180 1560718 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 16:19:00.152258 1560718 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 16:19:05.563922 1560718 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.439854147s
	I1213 16:19:06.693747 1560718 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.568960048s
	I1213 16:19:07.127855 1560718 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003363759s
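kubeadm's control-plane-check above probes the three static pods on their local endpoints: the apiserver at https://192.168.76.2:8443/livez and the controller-manager and scheduler on their localhost-only secure ports 10257 and 10259. The same probes can be repeated by hand from inside the node; on a kubeadm cluster these paths are normally readable without credentials (a hedged assumption; pass a token or --cacert if anonymous auth has been disabled):

	curl -ks https://192.168.76.2:8443/livez; echo
	curl -ks https://127.0.0.1:10257/healthz; echo   # kube-controller-manager
	curl -ks https://127.0.0.1:10259/livez; echo     # kube-scheduler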
	I1213 16:19:07.168662 1560718 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 16:19:07.181516 1560718 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 16:19:07.201330 1560718 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 16:19:07.201601 1560718 kubeadm.go:319] [mark-control-plane] Marking the node auto-023791 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 16:19:07.215430 1560718 kubeadm.go:319] [bootstrap-token] Using token: n8rs13.yoq0aaupe6p3xr1h
	I1213 16:19:07.218408 1560718 out.go:252]   - Configuring RBAC rules ...
	I1213 16:19:07.218539 1560718 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 16:19:07.223254 1560718 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 16:19:07.231756 1560718 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 16:19:07.238440 1560718 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 16:19:07.242956 1560718 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 16:19:07.247383 1560718 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 16:19:07.535342 1560718 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 16:19:07.968172 1560718 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 16:19:08.534883 1560718 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 16:19:08.535997 1560718 kubeadm.go:319] 
	I1213 16:19:08.536067 1560718 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 16:19:08.536072 1560718 kubeadm.go:319] 
	I1213 16:19:08.536149 1560718 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 16:19:08.536159 1560718 kubeadm.go:319] 
	I1213 16:19:08.536184 1560718 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 16:19:08.536243 1560718 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 16:19:08.536293 1560718 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 16:19:08.536297 1560718 kubeadm.go:319] 
	I1213 16:19:08.536351 1560718 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 16:19:08.536355 1560718 kubeadm.go:319] 
	I1213 16:19:08.536403 1560718 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 16:19:08.536407 1560718 kubeadm.go:319] 
	I1213 16:19:08.536459 1560718 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 16:19:08.536535 1560718 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 16:19:08.536603 1560718 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 16:19:08.536607 1560718 kubeadm.go:319] 
	I1213 16:19:08.536691 1560718 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 16:19:08.536769 1560718 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 16:19:08.536773 1560718 kubeadm.go:319] 
	I1213 16:19:08.536856 1560718 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token n8rs13.yoq0aaupe6p3xr1h \
	I1213 16:19:08.536967 1560718 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:590ff7d5a34ba2f13bc4446ba280674514ec0440f2cd73335e75879dbf7fc61d \
	I1213 16:19:08.536989 1560718 kubeadm.go:319] 	--control-plane 
	I1213 16:19:08.536992 1560718 kubeadm.go:319] 
	I1213 16:19:08.537077 1560718 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 16:19:08.537081 1560718 kubeadm.go:319] 
	I1213 16:19:08.537389 1560718 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token n8rs13.yoq0aaupe6p3xr1h \
	I1213 16:19:08.537505 1560718 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:590ff7d5a34ba2f13bc4446ba280674514ec0440f2cd73335e75879dbf7fc61d 
	I1213 16:19:08.542155 1560718 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 16:19:08.542389 1560718 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 16:19:08.542499 1560718 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 16:19:08.542519 1560718 cni.go:84] Creating CNI manager for ""
	I1213 16:19:08.542527 1560718 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:19:08.545730 1560718 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 16:19:08.548718 1560718 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 16:19:08.552945 1560718 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 16:19:08.552969 1560718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 16:19:08.567600 1560718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
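Because the docker driver is paired with the containerd runtime, minikube selects the kindnet CNI and applies its manifest with the kubectl binary it manages under /var/lib/minikube/binaries. A hedged way to check the result afterwards (the daemonset name "kindnet" matches the kindnet-* pod seen later in this log, but is otherwise an assumption):
	kubectl --context auto-023791 -n kube-system get ds kindnet
	kubectl --context auto-023791 -n kube-system get pods -o wide | grep kindnet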
	I1213 16:19:08.878827 1560718 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 16:19:08.878969 1560718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:19:08.879065 1560718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-023791 minikube.k8s.io/updated_at=2025_12_13T16_19_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=auto-023791 minikube.k8s.io/primary=true
	I1213 16:19:08.889527 1560718 ops.go:34] apiserver oom_adj: -16
	I1213 16:19:09.021726 1560718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:19:09.522708 1560718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:19:10.022475 1560718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:19:10.522165 1560718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:19:11.022560 1560718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:19:11.522554 1560718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:19:12.022171 1560718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:19:12.522369 1560718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:19:13.021844 1560718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:19:13.140138 1560718 kubeadm.go:1114] duration metric: took 4.261224859s to wait for elevateKubeSystemPrivileges
	I1213 16:19:13.140175 1560718 kubeadm.go:403] duration metric: took 20.459361361s to StartCluster
	I1213 16:19:13.140198 1560718 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:19:13.140282 1560718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:19:13.141432 1560718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:19:13.141728 1560718 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:19:13.141901 1560718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 16:19:13.142208 1560718 config.go:182] Loaded profile config "auto-023791": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 16:19:13.142264 1560718 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 16:19:13.142340 1560718 addons.go:70] Setting storage-provisioner=true in profile "auto-023791"
	I1213 16:19:13.142365 1560718 addons.go:239] Setting addon storage-provisioner=true in "auto-023791"
	I1213 16:19:13.142408 1560718 host.go:66] Checking if "auto-023791" exists ...
	I1213 16:19:13.143035 1560718 cli_runner.go:164] Run: docker container inspect auto-023791 --format={{.State.Status}}
	I1213 16:19:13.144981 1560718 addons.go:70] Setting default-storageclass=true in profile "auto-023791"
	I1213 16:19:13.145014 1560718 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-023791"
	I1213 16:19:13.145334 1560718 cli_runner.go:164] Run: docker container inspect auto-023791 --format={{.State.Status}}
	I1213 16:19:13.147605 1560718 out.go:179] * Verifying Kubernetes components...
	I1213 16:19:13.152308 1560718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:19:13.189211 1560718 addons.go:239] Setting addon default-storageclass=true in "auto-023791"
	I1213 16:19:13.189252 1560718 host.go:66] Checking if "auto-023791" exists ...
	I1213 16:19:13.189753 1560718 cli_runner.go:164] Run: docker container inspect auto-023791 --format={{.State.Status}}
	I1213 16:19:13.192021 1560718 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 16:19:13.194887 1560718 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:19:13.194914 1560718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 16:19:13.194990 1560718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-023791
	I1213 16:19:13.227093 1560718 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 16:19:13.227115 1560718 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 16:19:13.227179 1560718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-023791
	I1213 16:19:13.229245 1560718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/auto-023791/id_rsa Username:docker}
	I1213 16:19:13.257684 1560718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/auto-023791/id_rsa Username:docker}
	I1213 16:19:13.368528 1560718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:19:13.604879 1560718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 16:19:13.605075 1560718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:19:13.611721 1560718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:19:14.682346 1560718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.313719866s)
	I1213 16:19:14.682402 1560718 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.07729965s)
	I1213 16:19:14.682413 1560718 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1213 16:19:14.682626 1560718 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.077400767s)
	I1213 16:19:14.683590 1560718 node_ready.go:35] waiting up to 15m0s for node "auto-023791" to be "Ready" ...
	I1213 16:19:14.683855 1560718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.072036283s)
	I1213 16:19:14.722741 1560718 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 16:19:14.725642 1560718 addons.go:530] duration metric: took 1.58336685s for enable addons: enabled=[storage-provisioner default-storageclass]
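The sed pipeline that completed at 16:19:14 above rewrites the CoreDNS Corefile so that cluster DNS resolves host.minikube.internal to the host-side gateway (192.168.76.1 on this network) and enables the log plugin. A sketch for inspecting the injected stanza on a running cluster; the jsonpath key and grep pattern assume the stock kubeadm CoreDNS ConfigMap layout:
	kubectl --context auto-023791 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	# expected, approximately:
	#        hosts {
	#           192.168.76.1 host.minikube.internal
	#           fallthrough
	#        }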
	I1213 16:19:15.193627 1560718 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-023791" context rescaled to 1 replicas
	W1213 16:19:16.686680 1560718 node_ready.go:57] node "auto-023791" has "Ready":"False" status (will retry)
	W1213 16:19:18.687271 1560718 node_ready.go:57] node "auto-023791" has "Ready":"False" status (will retry)
	W1213 16:19:20.687522 1560718 node_ready.go:57] node "auto-023791" has "Ready":"False" status (will retry)
	W1213 16:19:23.186850 1560718 node_ready.go:57] node "auto-023791" has "Ready":"False" status (will retry)
	I1213 16:19:25.187215 1560718 node_ready.go:49] node "auto-023791" is "Ready"
	I1213 16:19:25.187251 1560718 node_ready.go:38] duration metric: took 10.503598342s for node "auto-023791" to be "Ready" ...
	I1213 16:19:25.187266 1560718 api_server.go:52] waiting for apiserver process to appear ...
	I1213 16:19:25.187346 1560718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:19:25.199846 1560718 api_server.go:72] duration metric: took 12.058087767s to wait for apiserver process to appear ...
	I1213 16:19:25.199872 1560718 api_server.go:88] waiting for apiserver healthz status ...
	I1213 16:19:25.199905 1560718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 16:19:25.208385 1560718 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 16:19:25.209579 1560718 api_server.go:141] control plane version: v1.34.2
	I1213 16:19:25.209607 1560718 api_server.go:131] duration metric: took 9.728998ms to wait for apiserver health ...
	I1213 16:19:25.209616 1560718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 16:19:25.213520 1560718 system_pods.go:59] 8 kube-system pods found
	I1213 16:19:25.213558 1560718 system_pods.go:61] "coredns-66bc5c9577-nf9mj" [a4b10ed1-885b-4d7f-88ef-aa799c2e5c69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:19:25.213565 1560718 system_pods.go:61] "etcd-auto-023791" [00141623-af17-4430-9329-d659e73aa887] Running
	I1213 16:19:25.213574 1560718 system_pods.go:61] "kindnet-wrffr" [204b3c2e-045a-495f-bff3-4862269ab7c3] Running
	I1213 16:19:25.213578 1560718 system_pods.go:61] "kube-apiserver-auto-023791" [1fa0514b-8ab8-4302-bf87-f1c2782b1518] Running
	I1213 16:19:25.213582 1560718 system_pods.go:61] "kube-controller-manager-auto-023791" [681c9d46-a1f1-4cdb-9119-7de6fd6c3f63] Running
	I1213 16:19:25.213586 1560718 system_pods.go:61] "kube-proxy-hnqhc" [0e2a07fb-feb9-4a3d-8f67-b13b6443e727] Running
	I1213 16:19:25.213590 1560718 system_pods.go:61] "kube-scheduler-auto-023791" [f5057329-bdc2-45f9-980e-2fa903e8472f] Running
	I1213 16:19:25.213596 1560718 system_pods.go:61] "storage-provisioner" [6f23232c-e237-4fee-96b9-3f7160b22153] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 16:19:25.213602 1560718 system_pods.go:74] duration metric: took 3.980646ms to wait for pod list to return data ...
	I1213 16:19:25.213613 1560718 default_sa.go:34] waiting for default service account to be created ...
	I1213 16:19:25.216533 1560718 default_sa.go:45] found service account: "default"
	I1213 16:19:25.216595 1560718 default_sa.go:55] duration metric: took 2.975188ms for default service account to be created ...
	I1213 16:19:25.216615 1560718 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 16:19:25.220340 1560718 system_pods.go:86] 8 kube-system pods found
	I1213 16:19:25.220380 1560718 system_pods.go:89] "coredns-66bc5c9577-nf9mj" [a4b10ed1-885b-4d7f-88ef-aa799c2e5c69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:19:25.220387 1560718 system_pods.go:89] "etcd-auto-023791" [00141623-af17-4430-9329-d659e73aa887] Running
	I1213 16:19:25.220425 1560718 system_pods.go:89] "kindnet-wrffr" [204b3c2e-045a-495f-bff3-4862269ab7c3] Running
	I1213 16:19:25.220430 1560718 system_pods.go:89] "kube-apiserver-auto-023791" [1fa0514b-8ab8-4302-bf87-f1c2782b1518] Running
	I1213 16:19:25.220440 1560718 system_pods.go:89] "kube-controller-manager-auto-023791" [681c9d46-a1f1-4cdb-9119-7de6fd6c3f63] Running
	I1213 16:19:25.220454 1560718 system_pods.go:89] "kube-proxy-hnqhc" [0e2a07fb-feb9-4a3d-8f67-b13b6443e727] Running
	I1213 16:19:25.220458 1560718 system_pods.go:89] "kube-scheduler-auto-023791" [f5057329-bdc2-45f9-980e-2fa903e8472f] Running
	I1213 16:19:25.220464 1560718 system_pods.go:89] "storage-provisioner" [6f23232c-e237-4fee-96b9-3f7160b22153] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 16:19:25.220517 1560718 retry.go:31] will retry after 277.959719ms: missing components: kube-dns
	I1213 16:19:25.503733 1560718 system_pods.go:86] 8 kube-system pods found
	I1213 16:19:25.503767 1560718 system_pods.go:89] "coredns-66bc5c9577-nf9mj" [a4b10ed1-885b-4d7f-88ef-aa799c2e5c69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:19:25.503807 1560718 system_pods.go:89] "etcd-auto-023791" [00141623-af17-4430-9329-d659e73aa887] Running
	I1213 16:19:25.503817 1560718 system_pods.go:89] "kindnet-wrffr" [204b3c2e-045a-495f-bff3-4862269ab7c3] Running
	I1213 16:19:25.503823 1560718 system_pods.go:89] "kube-apiserver-auto-023791" [1fa0514b-8ab8-4302-bf87-f1c2782b1518] Running
	I1213 16:19:25.503827 1560718 system_pods.go:89] "kube-controller-manager-auto-023791" [681c9d46-a1f1-4cdb-9119-7de6fd6c3f63] Running
	I1213 16:19:25.503846 1560718 system_pods.go:89] "kube-proxy-hnqhc" [0e2a07fb-feb9-4a3d-8f67-b13b6443e727] Running
	I1213 16:19:25.503853 1560718 system_pods.go:89] "kube-scheduler-auto-023791" [f5057329-bdc2-45f9-980e-2fa903e8472f] Running
	I1213 16:19:25.503859 1560718 system_pods.go:89] "storage-provisioner" [6f23232c-e237-4fee-96b9-3f7160b22153] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 16:19:25.503893 1560718 retry.go:31] will retry after 382.540558ms: missing components: kube-dns
	I1213 16:19:25.891507 1560718 system_pods.go:86] 8 kube-system pods found
	I1213 16:19:25.891542 1560718 system_pods.go:89] "coredns-66bc5c9577-nf9mj" [a4b10ed1-885b-4d7f-88ef-aa799c2e5c69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:19:25.891548 1560718 system_pods.go:89] "etcd-auto-023791" [00141623-af17-4430-9329-d659e73aa887] Running
	I1213 16:19:25.891574 1560718 system_pods.go:89] "kindnet-wrffr" [204b3c2e-045a-495f-bff3-4862269ab7c3] Running
	I1213 16:19:25.891583 1560718 system_pods.go:89] "kube-apiserver-auto-023791" [1fa0514b-8ab8-4302-bf87-f1c2782b1518] Running
	I1213 16:19:25.891588 1560718 system_pods.go:89] "kube-controller-manager-auto-023791" [681c9d46-a1f1-4cdb-9119-7de6fd6c3f63] Running
	I1213 16:19:25.891592 1560718 system_pods.go:89] "kube-proxy-hnqhc" [0e2a07fb-feb9-4a3d-8f67-b13b6443e727] Running
	I1213 16:19:25.891602 1560718 system_pods.go:89] "kube-scheduler-auto-023791" [f5057329-bdc2-45f9-980e-2fa903e8472f] Running
	I1213 16:19:25.891608 1560718 system_pods.go:89] "storage-provisioner" [6f23232c-e237-4fee-96b9-3f7160b22153] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 16:19:25.891626 1560718 retry.go:31] will retry after 324.850586ms: missing components: kube-dns
	I1213 16:19:26.221398 1560718 system_pods.go:86] 8 kube-system pods found
	I1213 16:19:26.221470 1560718 system_pods.go:89] "coredns-66bc5c9577-nf9mj" [a4b10ed1-885b-4d7f-88ef-aa799c2e5c69] Running
	I1213 16:19:26.221491 1560718 system_pods.go:89] "etcd-auto-023791" [00141623-af17-4430-9329-d659e73aa887] Running
	I1213 16:19:26.221510 1560718 system_pods.go:89] "kindnet-wrffr" [204b3c2e-045a-495f-bff3-4862269ab7c3] Running
	I1213 16:19:26.221550 1560718 system_pods.go:89] "kube-apiserver-auto-023791" [1fa0514b-8ab8-4302-bf87-f1c2782b1518] Running
	I1213 16:19:26.221575 1560718 system_pods.go:89] "kube-controller-manager-auto-023791" [681c9d46-a1f1-4cdb-9119-7de6fd6c3f63] Running
	I1213 16:19:26.221603 1560718 system_pods.go:89] "kube-proxy-hnqhc" [0e2a07fb-feb9-4a3d-8f67-b13b6443e727] Running
	I1213 16:19:26.221623 1560718 system_pods.go:89] "kube-scheduler-auto-023791" [f5057329-bdc2-45f9-980e-2fa903e8472f] Running
	I1213 16:19:26.221658 1560718 system_pods.go:89] "storage-provisioner" [6f23232c-e237-4fee-96b9-3f7160b22153] Running
	I1213 16:19:26.221688 1560718 system_pods.go:126] duration metric: took 1.005065144s to wait for k8s-apps to be running ...
	I1213 16:19:26.221709 1560718 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 16:19:26.221794 1560718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 16:19:26.239891 1560718 system_svc.go:56] duration metric: took 18.172864ms WaitForService to wait for kubelet
	I1213 16:19:26.239921 1560718 kubeadm.go:587] duration metric: took 13.098168127s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 16:19:26.239958 1560718 node_conditions.go:102] verifying NodePressure condition ...
	I1213 16:19:26.243784 1560718 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 16:19:26.243818 1560718 node_conditions.go:123] node cpu capacity is 2
	I1213 16:19:26.243832 1560718 node_conditions.go:105] duration metric: took 3.864702ms to run NodePressure ...
	I1213 16:19:26.243866 1560718 start.go:242] waiting for startup goroutines ...
	I1213 16:19:26.243882 1560718 start.go:247] waiting for cluster config update ...
	I1213 16:19:26.243897 1560718 start.go:256] writing updated cluster config ...
	I1213 16:19:26.244193 1560718 ssh_runner.go:195] Run: rm -f paused
	I1213 16:19:26.248296 1560718 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 16:19:26.254211 1560718 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nf9mj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:19:26.259076 1560718 pod_ready.go:94] pod "coredns-66bc5c9577-nf9mj" is "Ready"
	I1213 16:19:26.259102 1560718 pod_ready.go:86] duration metric: took 4.864294ms for pod "coredns-66bc5c9577-nf9mj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:19:26.261456 1560718 pod_ready.go:83] waiting for pod "etcd-auto-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:19:26.265966 1560718 pod_ready.go:94] pod "etcd-auto-023791" is "Ready"
	I1213 16:19:26.265997 1560718 pod_ready.go:86] duration metric: took 4.514911ms for pod "etcd-auto-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:19:26.268633 1560718 pod_ready.go:83] waiting for pod "kube-apiserver-auto-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:19:26.273498 1560718 pod_ready.go:94] pod "kube-apiserver-auto-023791" is "Ready"
	I1213 16:19:26.273528 1560718 pod_ready.go:86] duration metric: took 4.867601ms for pod "kube-apiserver-auto-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:19:26.276467 1560718 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:19:26.652776 1560718 pod_ready.go:94] pod "kube-controller-manager-auto-023791" is "Ready"
	I1213 16:19:26.652809 1560718 pod_ready.go:86] duration metric: took 376.309594ms for pod "kube-controller-manager-auto-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:19:26.853498 1560718 pod_ready.go:83] waiting for pod "kube-proxy-hnqhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:19:27.252645 1560718 pod_ready.go:94] pod "kube-proxy-hnqhc" is "Ready"
	I1213 16:19:27.252671 1560718 pod_ready.go:86] duration metric: took 399.144788ms for pod "kube-proxy-hnqhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:19:27.452777 1560718 pod_ready.go:83] waiting for pod "kube-scheduler-auto-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:19:27.853450 1560718 pod_ready.go:94] pod "kube-scheduler-auto-023791" is "Ready"
	I1213 16:19:27.853475 1560718 pod_ready.go:86] duration metric: took 400.670259ms for pod "kube-scheduler-auto-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:19:27.853488 1560718 pod_ready.go:40] duration metric: took 1.605156512s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 16:19:27.912096 1560718 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 16:19:27.917466 1560718 out.go:179] * Done! kubectl is now configured to use "auto-023791" cluster and "default" namespace by default
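The wait loops above (node Ready, kube-system pods, kubelet service, per-pod Ready) can be re-run by hand once a start finishes; a minimal sketch, assuming the "auto-023791" context that the final line reports as configured:
	kubectl --context auto-023791 get nodes                          # node should report Ready
	kubectl --context auto-023791 -n kube-system get pods            # coredns, etcd, kube-*, kindnet, storage-provisioner all Running
	minikube -p auto-023791 ssh -- sudo systemctl is-active kubelet  # should print "active"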
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216398345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216470499Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216572930Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216649974Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216720996Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216786135Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216843479Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216912187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216985974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.217088848Z" level=info msg="Connect containerd service"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.217463198Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.218120758Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.231205084Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.231272274Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.231345659Z" level=info msg="Start subscribing containerd event"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.231396062Z" level=info msg="Start recovering state"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.254976526Z" level=info msg="Start event monitor"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255192266Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255261828Z" level=info msg="Start streaming server"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255422735Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255487619Z" level=info msg="runtime interface starting up..."
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255541731Z" level=info msg="starting plugins..."
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255628375Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 16:04:48 no-preload-439544 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.257678755Z" level=info msg="containerd successfully booted in 0.068392s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:19:53.885896    8167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:19:53.886593    8167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:19:53.888749    8167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:19:53.889276    8167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:19:53.890787    8167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
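The repeated "connection refused" errors mean nothing is listening on localhost:8443 inside the node, i.e. kube-apiserver never came up, so the log collector skips the kubectl-based sections. A hedged way to confirm from the host (ss and crictl ship in the minikube node image; treat the exact invocations as a sketch):
	minikube -p no-preload-439544 ssh -- sudo ss -ltn 'sport = :8443'             # no listener expected here
	minikube -p no-preload-439544 ssh -- sudo crictl ps -a --name kube-apiserver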
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 16:19:53 up  8:02,  0 user,  load average: 2.19, 1.15, 1.16
	Linux no-preload-439544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 16:19:50 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:19:51 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1202.
	Dec 13 16:19:51 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:19:51 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:19:51 no-preload-439544 kubelet[8032]: E1213 16:19:51.392384    8032 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:19:51 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:19:51 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:19:52 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 13 16:19:52 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:19:52 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:19:52 no-preload-439544 kubelet[8038]: E1213 16:19:52.165979    8038 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:19:52 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:19:52 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:19:52 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 13 16:19:52 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:19:52 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:19:52 no-preload-439544 kubelet[8060]: E1213 16:19:52.947749    8060 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:19:52 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:19:52 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:19:53 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 13 16:19:53 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:19:53 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:19:53 no-preload-439544 kubelet[8125]: E1213 16:19:53.681513    8125 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:19:53 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:19:53 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
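The kubelet section of the dump shows the underlying failure: every systemd restart of the v1.35.0-beta.0 kubelet exits with "kubelet is configured to not run on a host using cgroup v1", so the static kube-apiserver pod is never created, which is consistent with the empty container list and the connection-refused errors earlier in the dump. The host kernel (5.15.0-1084-aws) is on the legacy cgroup v1 hierarchy, matching the cgroups v1 maintenance-mode warning kubeadm printed earlier on the same host. A quick check for which cgroup version a host runs (standard coreutils; the output-to-version mapping is the usual convention):
	stat -fc %T /sys/fs/cgroup/
	# cgroup2fs -> unified cgroup v2; tmpfs -> legacy cgroup v1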
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544: exit status 2 (494.813355ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-439544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (373.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m8.603639937s)

                                                
                                                
-- stdout --
	* [newest-cni-526531] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-526531" primary control-plane node in "newest-cni-526531" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 16:12:13.872500 1542350 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:12:13.872721 1542350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:12:13.872749 1542350 out.go:374] Setting ErrFile to fd 2...
	I1213 16:12:13.872769 1542350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:12:13.873083 1542350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:12:13.873513 1542350 out.go:368] Setting JSON to false
	I1213 16:12:13.874453 1542350 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28483,"bootTime":1765613851,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:12:13.874604 1542350 start.go:143] virtualization:  
	I1213 16:12:13.877765 1542350 out.go:179] * [newest-cni-526531] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:12:13.881549 1542350 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:12:13.881619 1542350 notify.go:221] Checking for updates...
	I1213 16:12:13.887324 1542350 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:12:13.890274 1542350 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:12:13.893162 1542350 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:12:13.896033 1542350 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:12:13.898948 1542350 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:12:13.902364 1542350 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:12:13.902980 1542350 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:12:13.935990 1542350 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:12:13.936167 1542350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:12:14.000058 1542350 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:12:13.991072746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:12:14.000167 1542350 docker.go:319] overlay module found
	I1213 16:12:14.005438 1542350 out.go:179] * Using the docker driver based on existing profile
	I1213 16:12:14.008564 1542350 start.go:309] selected driver: docker
	I1213 16:12:14.008597 1542350 start.go:927] validating driver "docker" against &{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:12:14.008696 1542350 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:12:14.009457 1542350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:12:14.067852 1542350 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:12:14.058134833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:12:14.068237 1542350 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 16:12:14.068271 1542350 cni.go:84] Creating CNI manager for ""
	I1213 16:12:14.068329 1542350 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:12:14.068382 1542350 start.go:353] cluster config:
	{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:12:14.071643 1542350 out.go:179] * Starting "newest-cni-526531" primary control-plane node in "newest-cni-526531" cluster
	I1213 16:12:14.074436 1542350 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:12:14.077449 1542350 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:12:14.080394 1542350 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:12:14.080442 1542350 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 16:12:14.080452 1542350 cache.go:65] Caching tarball of preloaded images
	I1213 16:12:14.080507 1542350 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:12:14.080564 1542350 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 16:12:14.080575 1542350 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 16:12:14.080690 1542350 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:12:14.101187 1542350 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:12:14.101205 1542350 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:12:14.101219 1542350 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:12:14.101249 1542350 start.go:360] acquireMachinesLock for newest-cni-526531: {Name:mk8328ad899404812480e264edd6e13cbbd26230 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:12:14.101300 1542350 start.go:364] duration metric: took 35.502µs to acquireMachinesLock for "newest-cni-526531"
	I1213 16:12:14.101319 1542350 start.go:96] Skipping create...Using existing machine configuration
	I1213 16:12:14.101324 1542350 fix.go:54] fixHost starting: 
	I1213 16:12:14.101579 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:14.120089 1542350 fix.go:112] recreateIfNeeded on newest-cni-526531: state=Stopped err=<nil>
	W1213 16:12:14.120117 1542350 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 16:12:14.123566 1542350 out.go:252] * Restarting existing docker container for "newest-cni-526531" ...
	I1213 16:12:14.123658 1542350 cli_runner.go:164] Run: docker start newest-cni-526531
	I1213 16:12:14.407857 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:14.431483 1542350 kic.go:430] container "newest-cni-526531" state is running.
	I1213 16:12:14.431880 1542350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:12:14.455073 1542350 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:12:14.455509 1542350 machine.go:94] provisionDockerMachine start ...
	I1213 16:12:14.455579 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:14.483076 1542350 main.go:143] libmachine: Using SSH client type: native
	I1213 16:12:14.483636 1542350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I1213 16:12:14.483652 1542350 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:12:14.484350 1542350 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 16:12:17.634930 1542350 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:12:17.634954 1542350 ubuntu.go:182] provisioning hostname "newest-cni-526531"
	I1213 16:12:17.635019 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:17.654681 1542350 main.go:143] libmachine: Using SSH client type: native
	I1213 16:12:17.654996 1542350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I1213 16:12:17.655008 1542350 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-526531 && echo "newest-cni-526531" | sudo tee /etc/hostname
	I1213 16:12:17.812861 1542350 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:12:17.812938 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:17.830348 1542350 main.go:143] libmachine: Using SSH client type: native
	I1213 16:12:17.830658 1542350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I1213 16:12:17.830675 1542350 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-526531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-526531/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-526531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:12:17.987587 1542350 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:12:17.987621 1542350 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:12:17.987641 1542350 ubuntu.go:190] setting up certificates
	I1213 16:12:17.987659 1542350 provision.go:84] configureAuth start
	I1213 16:12:17.987726 1542350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:12:18.011145 1542350 provision.go:143] copyHostCerts
	I1213 16:12:18.011230 1542350 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:12:18.011240 1542350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:12:18.011430 1542350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:12:18.011569 1542350 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:12:18.011584 1542350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:12:18.011623 1542350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:12:18.011690 1542350 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:12:18.011698 1542350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:12:18.011724 1542350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:12:18.011776 1542350 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.newest-cni-526531 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-526531]
	I1213 16:12:18.508738 1542350 provision.go:177] copyRemoteCerts
	I1213 16:12:18.508811 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:12:18.508861 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.526422 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:18.636742 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 16:12:18.655155 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 16:12:18.674107 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:12:18.692128 1542350 provision.go:87] duration metric: took 704.439864ms to configureAuth
	I1213 16:12:18.692158 1542350 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:12:18.692373 1542350 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:12:18.692387 1542350 machine.go:97] duration metric: took 4.236863655s to provisionDockerMachine
	I1213 16:12:18.692395 1542350 start.go:293] postStartSetup for "newest-cni-526531" (driver="docker")
	I1213 16:12:18.692409 1542350 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:12:18.692476 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:12:18.692523 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.710444 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:18.815900 1542350 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:12:18.819552 1542350 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:12:18.819582 1542350 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:12:18.819595 1542350 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:12:18.819651 1542350 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:12:18.819740 1542350 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:12:18.819846 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:12:18.827635 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:12:18.845967 1542350 start.go:296] duration metric: took 153.553828ms for postStartSetup
	I1213 16:12:18.846048 1542350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:12:18.846103 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.863404 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:18.964333 1542350 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:12:18.969276 1542350 fix.go:56] duration metric: took 4.867943668s for fixHost
	I1213 16:12:18.969308 1542350 start.go:83] releasing machines lock for "newest-cni-526531", held for 4.867999692s
	I1213 16:12:18.969378 1542350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:12:18.986065 1542350 ssh_runner.go:195] Run: cat /version.json
	I1213 16:12:18.986168 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.986433 1542350 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:12:18.986485 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:19.008809 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:19.015681 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:19.197190 1542350 ssh_runner.go:195] Run: systemctl --version
	I1213 16:12:19.203734 1542350 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:12:19.208293 1542350 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:12:19.208365 1542350 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:12:19.216699 1542350 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 16:12:19.216724 1542350 start.go:496] detecting cgroup driver to use...
	I1213 16:12:19.216769 1542350 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:12:19.216822 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:12:19.235051 1542350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:12:19.248627 1542350 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:12:19.248695 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:12:19.264536 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:12:19.278273 1542350 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:12:19.415282 1542350 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:12:19.542944 1542350 docker.go:234] disabling docker service ...
	I1213 16:12:19.543049 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:12:19.558893 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:12:19.572698 1542350 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:12:19.700893 1542350 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:12:19.830331 1542350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:12:19.843617 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:12:19.858193 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:12:19.867834 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:12:19.877291 1542350 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:12:19.877362 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:12:19.886078 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:12:19.894812 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:12:19.903917 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:12:19.912720 1542350 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:12:19.921167 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:12:19.930798 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:12:19.940230 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:12:19.950040 1542350 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:12:19.958360 1542350 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:12:19.966286 1542350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:12:20.089676 1542350 ssh_runner.go:195] Run: sudo systemctl restart containerd
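To make the cgroupfs/runc-v2 reconfiguration above easier to follow: the edits are plain in-place sed substitutions on /etc/containerd/config.toml, followed by a daemon-reload and a containerd restart. A minimal Go sketch of the same sequence, assuming direct execution on the node rather than minikube's ssh_runner; the expressions and the config path are copied from the logged commands:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// A subset of the in-place edits logged above; illustrative, not minikube source.
	var sedExprs = []string{
		`s|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|`,
		`s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`,
		`s|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g`,
		`s|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g`,
	}

	func main() {
		const cfg = "/etc/containerd/config.toml"
		for _, e := range sedExprs {
			// sed -i -r edits the file in place, exactly as in the log.
			if out, err := exec.Command("sudo", "sed", "-i", "-r", e, cfg).CombinedOutput(); err != nil {
				fmt.Printf("sed %q failed: %v\n%s", e, err, out)
				return
			}
		}
		// Pick up the new config: daemon-reload, then restart containerd.
		for _, args := range [][]string{{"daemon-reload"}, {"restart", "containerd"}} {
			if out, err := exec.Command("sudo", append([]string{"systemctl"}, args...)...).CombinedOutput(); err != nil {
				fmt.Printf("systemctl %v failed: %v\n%s", args, err, out)
				return
			}
		}
	}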
	I1213 16:12:20.224467 1542350 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:12:20.224608 1542350 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:12:20.228661 1542350 start.go:564] Will wait 60s for crictl version
	I1213 16:12:20.228772 1542350 ssh_runner.go:195] Run: which crictl
	I1213 16:12:20.232454 1542350 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:12:20.257719 1542350 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 16:12:20.257840 1542350 ssh_runner.go:195] Run: containerd --version
	I1213 16:12:20.279500 1542350 ssh_runner.go:195] Run: containerd --version
	I1213 16:12:20.302783 1542350 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 16:12:20.305579 1542350 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:12:20.322844 1542350 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 16:12:20.326903 1542350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:12:20.339926 1542350 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 16:12:20.342782 1542350 kubeadm.go:884] updating cluster {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:12:20.342928 1542350 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:12:20.343016 1542350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:12:20.367771 1542350 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:12:20.367795 1542350 containerd.go:534] Images already preloaded, skipping extraction
	I1213 16:12:20.367857 1542350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:12:20.393096 1542350 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:12:20.393118 1542350 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:12:20.393126 1542350 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 16:12:20.393232 1542350 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-526531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 16:12:20.393305 1542350 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:12:20.418251 1542350 cni.go:84] Creating CNI manager for ""
	I1213 16:12:20.418277 1542350 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:12:20.418295 1542350 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 16:12:20.418318 1542350 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-526531 NodeName:newest-cni-526531 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:12:20.418435 1542350 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-526531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 16:12:20.418510 1542350 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 16:12:20.426561 1542350 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:12:20.426663 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:12:20.434234 1542350 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 16:12:20.447269 1542350 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 16:12:20.459764 1542350 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
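Before kubeadm consumes /var/tmp/minikube/kubeadm.yaml.new, the rendered config can be sanity-checked for the values minikube substituted above (pod CIDR, cgroup driver, control-plane endpoint, Kubernetes version). A stdlib-only Go sketch, with the path and expected strings taken from the config dump above; this is an illustration, not part of minikube:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Path and expected values are copied from the kubeadm config logged above.
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		cfg := string(data)
		for _, want := range []string{
			`podSubnet: "10.42.0.0/16"`,
			"cgroupDriver: cgroupfs",
			"controlPlaneEndpoint: control-plane.minikube.internal:8443",
			"kubernetesVersion: v1.35.0-beta.0",
		} {
			if !strings.Contains(cfg, want) {
				fmt.Println("missing expected setting:", want)
			}
		}
	}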
	I1213 16:12:20.473147 1542350 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:12:20.476975 1542350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:12:20.486881 1542350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:12:20.634044 1542350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:12:20.650082 1542350 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531 for IP: 192.168.76.2
	I1213 16:12:20.650107 1542350 certs.go:195] generating shared ca certs ...
	I1213 16:12:20.650125 1542350 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:20.650260 1542350 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:12:20.650315 1542350 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:12:20.650327 1542350 certs.go:257] generating profile certs ...
	I1213 16:12:20.650431 1542350 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key
	I1213 16:12:20.650494 1542350 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7
	I1213 16:12:20.650541 1542350 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key
	I1213 16:12:20.650652 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:12:20.650691 1542350 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:12:20.650704 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:12:20.650731 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:12:20.650764 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:12:20.650791 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:12:20.650844 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:12:20.651682 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:12:20.679737 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:12:20.697714 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:12:20.716102 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:12:20.734754 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 16:12:20.752380 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 16:12:20.770335 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:12:20.787592 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1213 16:12:20.805866 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:12:20.823616 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:12:20.845606 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:12:20.863659 1542350 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:12:20.877321 1542350 ssh_runner.go:195] Run: openssl version
	I1213 16:12:20.884096 1542350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.891462 1542350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:12:20.900719 1542350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.905878 1542350 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.905990 1542350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.952615 1542350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:12:20.960412 1542350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:20.967994 1542350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:12:20.975909 1542350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:20.979941 1542350 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:20.980042 1542350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:21.021453 1542350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:12:21.029467 1542350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.037114 1542350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:12:21.045054 1542350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.049353 1542350 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.049420 1542350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.090431 1542350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:12:21.097998 1542350 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:12:21.101759 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 16:12:21.142651 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 16:12:21.183449 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 16:12:21.224713 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 16:12:21.267101 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 16:12:21.308542 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
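The six openssl invocations above all use -checkend 86400, i.e. they fail only if a certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509; the certificate paths are the ones checked in the log, and this is an illustration rather than minikube's implementation:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// Flag certificates that expire within the next 24 hours, like
	// `openssl x509 -noout -in <cert> -checkend 86400`. Run as root.
	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		cutoff := time.Now().Add(24 * time.Hour)
		for _, path := range certs {
			data, err := os.ReadFile(path)
			if err != nil {
				fmt.Println(path, "read error:", err)
				continue
			}
			block, _ := pem.Decode(data)
			if block == nil {
				fmt.Println(path, "does not contain a PEM block")
				continue
			}
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				fmt.Println(path, "parse error:", err)
				continue
			}
			if cert.NotAfter.Before(cutoff) {
				fmt.Println(path, "expires within 24h, at", cert.NotAfter)
			}
		}
	}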
	I1213 16:12:21.350324 1542350 kubeadm.go:401] StartCluster: {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:12:21.350489 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:12:21.350594 1542350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:12:21.381089 1542350 cri.go:89] found id: ""
	I1213 16:12:21.381225 1542350 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:12:21.391210 1542350 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 16:12:21.391281 1542350 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 16:12:21.391387 1542350 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 16:12:21.399153 1542350 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 16:12:21.399882 1542350 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-526531" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:12:21.400209 1542350 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-526531" cluster setting kubeconfig missing "newest-cni-526531" context setting]
	I1213 16:12:21.400761 1542350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:21.402579 1542350 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 16:12:21.410218 1542350 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 16:12:21.410252 1542350 kubeadm.go:602] duration metric: took 18.943347ms to restartPrimaryControlPlane
	I1213 16:12:21.410262 1542350 kubeadm.go:403] duration metric: took 59.957451ms to StartCluster
	I1213 16:12:21.410276 1542350 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:21.410337 1542350 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:12:21.411206 1542350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:21.411496 1542350 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:12:21.411842 1542350 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 16:12:21.411918 1542350 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-526531"
	I1213 16:12:21.411932 1542350 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-526531"
	I1213 16:12:21.411959 1542350 host.go:66] Checking if "newest-cni-526531" exists ...
	I1213 16:12:21.412409 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.412632 1542350 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:12:21.412699 1542350 addons.go:70] Setting dashboard=true in profile "newest-cni-526531"
	I1213 16:12:21.412715 1542350 addons.go:239] Setting addon dashboard=true in "newest-cni-526531"
	W1213 16:12:21.412722 1542350 addons.go:248] addon dashboard should already be in state true
	I1213 16:12:21.412753 1542350 host.go:66] Checking if "newest-cni-526531" exists ...
	I1213 16:12:21.413150 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.417035 1542350 addons.go:70] Setting default-storageclass=true in profile "newest-cni-526531"
	I1213 16:12:21.417076 1542350 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-526531"
	I1213 16:12:21.417425 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.417785 1542350 out.go:179] * Verifying Kubernetes components...
	I1213 16:12:21.420756 1542350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:12:21.445354 1542350 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 16:12:21.448121 1542350 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:12:21.448150 1542350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 16:12:21.448220 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:21.451677 1542350 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 16:12:21.454559 1542350 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 16:12:21.457364 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 16:12:21.457390 1542350 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 16:12:21.457468 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:21.461079 1542350 addons.go:239] Setting addon default-storageclass=true in "newest-cni-526531"
	I1213 16:12:21.461127 1542350 host.go:66] Checking if "newest-cni-526531" exists ...
	I1213 16:12:21.461533 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.475798 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:21.512911 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:21.534060 1542350 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:21.534082 1542350 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 16:12:21.534143 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:21.567579 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:21.655778 1542350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:12:21.660712 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:12:21.695006 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 16:12:21.695031 1542350 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 16:12:21.711844 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 16:12:21.711868 1542350 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 16:12:21.726264 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 16:12:21.726287 1542350 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 16:12:21.742159 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 16:12:21.742183 1542350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 16:12:21.759213 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 16:12:21.759234 1542350 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 16:12:21.769713 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:21.791192 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 16:12:21.791260 1542350 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 16:12:21.814992 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 16:12:21.815063 1542350 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 16:12:21.830895 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 16:12:21.830972 1542350 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 16:12:21.849742 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:21.849815 1542350 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 16:12:21.864289 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:22.085788 1542350 api_server.go:52] waiting for apiserver process to appear ...
	I1213 16:12:22.085922 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:22.086102 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.086159 1542350 retry.go:31] will retry after 179.056392ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.086246 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.086353 1542350 retry.go:31] will retry after 181.278424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.086609 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.086645 1542350 retry.go:31] will retry after 135.21458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.222538 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:22.266024 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:12:22.268540 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:22.304395 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.304479 1542350 retry.go:31] will retry after 553.734459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.383592 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.383626 1542350 retry.go:31] will retry after 310.627988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.384428 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.384454 1542350 retry.go:31] will retry after 477.647599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.586862 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:22.695343 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:22.754692 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.754771 1542350 retry.go:31] will retry after 349.01084ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.858966 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:22.862536 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:22.953516 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.953561 1542350 retry.go:31] will retry after 343.489775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.953788 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.953849 1542350 retry.go:31] will retry after 703.913124ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.086088 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:23.104680 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:23.181935 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.181974 1542350 retry.go:31] will retry after 792.501261ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.297213 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:23.357629 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.357664 1542350 retry.go:31] will retry after 710.733017ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.586938 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:23.658890 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:23.729079 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.729127 1542350 retry.go:31] will retry after 642.679357ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.975021 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:24.036696 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.036729 1542350 retry.go:31] will retry after 1.762152539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.068939 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:24.086560 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:24.136068 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.136100 1542350 retry.go:31] will retry after 670.883469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.372395 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:24.444952 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.444996 1542350 retry.go:31] will retry after 1.594344916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.586388 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:24.807252 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:24.873210 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.873241 1542350 retry.go:31] will retry after 1.504699438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:25.086635 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:25.586697 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:25.799081 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:25.864095 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:25.864173 1542350 retry.go:31] will retry after 2.833515163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.040555 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:26.086244 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:26.134589 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.134626 1542350 retry.go:31] will retry after 2.268954348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.378204 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:26.437143 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.437179 1542350 retry.go:31] will retry after 2.009206759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.586404 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:27.086045 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:27.586074 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:28.086070 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
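
Interleaved with the failing applies, minikube keeps polling "sudo pgrep -xnf kube-apiserver.*minikube.*" to see whether the API server process has come back. A minimal sketch of an alternative readiness check, assuming the failures clear once something accepts TCP connections on localhost:8443; this polls the port directly and is not how minikube gates the addon applies (the log shows it uses pgrep).

    // waitapiserver.go - hedged sketch: poll localhost:8443 until it accepts
    // a TCP connection, then let the addon applies proceed.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForAPIServer returns nil once a TCP dial to addr succeeds, or an
    // error after the deadline passes.
    func waitForAPIServer(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("API server at %s not reachable within %s", addr, timeout)
    }

    func main() {
        if err := waitForAPIServer("localhost:8443", 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("API server is accepting connections; addon applies can proceed")
    }
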
	I1213 16:12:28.404537 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:28.446967 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:28.469203 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.469234 1542350 retry.go:31] will retry after 1.799417627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:28.516574 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.516611 1542350 retry.go:31] will retry after 2.723803306s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.586847 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:28.698086 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:28.762693 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.762729 1542350 retry.go:31] will retry after 1.577559772s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:29.086307 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:29.586102 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:30.086078 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:30.269847 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:30.336710 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.336749 1542350 retry.go:31] will retry after 2.535864228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.341075 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:30.419871 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.419902 1542350 retry.go:31] will retry after 2.188608586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.586056 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:31.086792 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:31.241343 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:31.303140 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:31.303175 1542350 retry.go:31] will retry after 4.008884548s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:31.586821 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:32.086175 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:32.587018 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:32.608868 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:32.689818 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:32.689856 1542350 retry.go:31] will retry after 5.074576061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:32.873213 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:32.940949 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:32.940984 1542350 retry.go:31] will retry after 7.456449925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:33.086429 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:33.586022 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:34.086094 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:34.585998 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:35.086896 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:35.312254 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:35.377660 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:35.377698 1542350 retry.go:31] will retry after 9.192453055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:35.587034 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:36.086843 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:36.586051 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:37.086838 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:37.586771 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:37.765048 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:37.824278 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:37.824312 1542350 retry.go:31] will retry after 11.772995815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:38.086864 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:38.586073 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:39.086969 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:39.586055 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:40.086122 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:40.398539 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:40.468470 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:40.468513 1542350 retry.go:31] will retry after 13.248485713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:40.586656 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:41.086065 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:41.586366 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:42.086189 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:42.586086 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:43.086089 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:43.586027 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:44.086574 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:44.570741 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:44.586247 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:44.654442 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:44.654477 1542350 retry.go:31] will retry after 14.969470504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:45.086353 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:45.586835 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:46.086082 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:46.586716 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:47.086075 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:47.586621 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:48.086124 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:48.586928 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:49.087028 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:49.586115 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:49.597980 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:49.660643 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:49.660672 1542350 retry.go:31] will retry after 11.077380605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:50.086194 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:50.586148 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:51.086673 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:51.586443 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:52.086098 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:52.586095 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:53.086117 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:53.586714 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:53.717290 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:53.777883 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:53.777918 1542350 retry.go:31] will retry after 17.242726639s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:54.086154 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:54.586837 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:55.086738 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:55.586843 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:56.086112 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:56.586065 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:57.087033 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:57.587026 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:58.086821 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:58.586066 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:59.086344 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:59.586987 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:59.624396 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:59.692077 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:59.692113 1542350 retry.go:31] will retry after 25.118824905s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:00.086703 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:00.586076 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:00.738326 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:13:00.797829 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:00.797860 1542350 retry.go:31] will retry after 28.273971977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:01.086109 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:01.586093 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:02.086800 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:02.586059 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:03.086118 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:03.586099 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:04.086574 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:04.586119 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:05.087001 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:05.586735 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:06.087021 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:06.586098 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:07.086059 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:07.586074 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:08.086071 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:08.586627 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:09.086132 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:09.586339 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:10.086956 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:10.586102 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:11.020938 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:13:11.086782 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:13:11.098002 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:11.098037 1542350 retry.go:31] will retry after 28.022573365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:11.586801 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:12.086121 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:12.586779 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:13.086780 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:13.586110 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:14.086075 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:14.586725 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:15.086688 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:15.587040 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:16.086588 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:16.586972 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:17.086881 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:17.586014 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:18.086609 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:18.586065 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:19.086985 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:19.586109 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:20.086095 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:20.586709 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:21.086130 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:21.586680 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:21.586792 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:21.614864 1542350 cri.go:89] found id: ""
	I1213 16:13:21.614885 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.614894 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:21.614901 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:21.614963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:21.646495 1542350 cri.go:89] found id: ""
	I1213 16:13:21.646517 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.646525 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:21.646532 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:21.646592 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:21.676251 1542350 cri.go:89] found id: ""
	I1213 16:13:21.676274 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.676283 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:21.676289 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:21.676358 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:21.706048 1542350 cri.go:89] found id: ""
	I1213 16:13:21.706075 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.706084 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:21.706093 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:21.706167 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:21.733595 1542350 cri.go:89] found id: ""
	I1213 16:13:21.733620 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.733628 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:21.733634 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:21.733694 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:21.758418 1542350 cri.go:89] found id: ""
	I1213 16:13:21.758444 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.758453 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:21.758459 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:21.758520 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:21.782936 1542350 cri.go:89] found id: ""
	I1213 16:13:21.782962 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.782970 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:21.782976 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:21.783038 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:21.807262 1542350 cri.go:89] found id: ""
	I1213 16:13:21.807289 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.807298 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:21.807327 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:21.807340 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:21.862632 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:21.862670 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:21.879878 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:21.879905 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:21.954675 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:21.946473    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.946958    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.948653    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.949095    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.950567    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:21.946473    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.946958    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.948653    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.949095    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.950567    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:21.954699 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:21.954712 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:21.980443 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:21.980489 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
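With no control-plane containers found, minikube falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs before the next poll. The same diagnostics can be pulled for a stuck profile with minikube's own log command (illustrative; assumes the --file flag is available in this minikube build):

    # Collect everything into one file for offline inspection.
    minikube logs -p functional-562018 --file=functional-562018-logs.txt

    # Or query the individual units the test inspects.
    minikube -p functional-562018 ssh -- "sudo journalctl -u kubelet -n 400"
    minikube -p functional-562018 ssh -- "sudo journalctl -u containerd -n 400"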
	I1213 16:13:24.514188 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:24.524708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:24.524788 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:24.549819 1542350 cri.go:89] found id: ""
	I1213 16:13:24.549840 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.549848 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:24.549866 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:24.549925 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:24.574754 1542350 cri.go:89] found id: ""
	I1213 16:13:24.574781 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.574790 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:24.574795 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:24.574857 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:24.606443 1542350 cri.go:89] found id: ""
	I1213 16:13:24.606465 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.606474 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:24.606481 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:24.606542 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:24.638639 1542350 cri.go:89] found id: ""
	I1213 16:13:24.638660 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.638668 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:24.638674 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:24.638733 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:24.671023 1542350 cri.go:89] found id: ""
	I1213 16:13:24.671046 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.671055 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:24.671063 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:24.671137 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:24.697378 1542350 cri.go:89] found id: ""
	I1213 16:13:24.697405 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.697414 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:24.697420 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:24.697497 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:24.722594 1542350 cri.go:89] found id: ""
	I1213 16:13:24.722621 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.722631 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:24.722637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:24.722728 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:24.746821 1542350 cri.go:89] found id: ""
	I1213 16:13:24.746850 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.746860 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:24.746878 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:24.746891 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:24.763249 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:24.763286 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 16:13:24.811678 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:13:24.851435 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:24.840548    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.841317    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843094    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843728    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.845484    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:24.840548    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.841317    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843094    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843728    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.845484    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:24.851500 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:24.851539 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1213 16:13:24.879668 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:24.879746 1542350 retry.go:31] will retry after 33.423455906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:24.890839 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:24.890870 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:24.920848 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:24.920877 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:27.476632 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:27.488585 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:27.488659 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:27.518011 1542350 cri.go:89] found id: ""
	I1213 16:13:27.518034 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.518042 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:27.518049 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:27.518110 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:27.543732 1542350 cri.go:89] found id: ""
	I1213 16:13:27.543759 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.543771 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:27.543777 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:27.543862 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:27.568999 1542350 cri.go:89] found id: ""
	I1213 16:13:27.569025 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.569033 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:27.569039 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:27.569097 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:27.607884 1542350 cri.go:89] found id: ""
	I1213 16:13:27.607913 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.607921 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:27.607928 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:27.607987 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:27.644349 1542350 cri.go:89] found id: ""
	I1213 16:13:27.644376 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.644384 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:27.644390 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:27.644461 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:27.676832 1542350 cri.go:89] found id: ""
	I1213 16:13:27.676860 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.676870 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:27.676875 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:27.676934 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:27.702113 1542350 cri.go:89] found id: ""
	I1213 16:13:27.702142 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.702151 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:27.702157 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:27.702219 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:27.727737 1542350 cri.go:89] found id: ""
	I1213 16:13:27.727763 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.727772 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:27.727782 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:27.727795 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:27.782283 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:27.782317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:27.800167 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:27.800195 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:27.871267 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:27.862487    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.863167    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.864974    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.865561    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.867199    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:27.862487    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.863167    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.864974    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.865561    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.867199    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:27.871378 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:27.871398 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:27.896932 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:27.896972 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:29.072145 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:13:29.152200 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:29.152237 1542350 retry.go:31] will retry after 45.772066333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:30.424283 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:30.435064 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:30.435141 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:30.458954 1542350 cri.go:89] found id: ""
	I1213 16:13:30.458977 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.458985 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:30.458991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:30.459050 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:30.482988 1542350 cri.go:89] found id: ""
	I1213 16:13:30.483016 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.483025 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:30.483031 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:30.483089 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:30.508669 1542350 cri.go:89] found id: ""
	I1213 16:13:30.508695 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.508704 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:30.508710 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:30.508797 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:30.532450 1542350 cri.go:89] found id: ""
	I1213 16:13:30.532543 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.532561 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:30.532569 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:30.532643 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:30.561998 1542350 cri.go:89] found id: ""
	I1213 16:13:30.562026 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.562035 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:30.562041 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:30.562132 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:30.600654 1542350 cri.go:89] found id: ""
	I1213 16:13:30.600688 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.600703 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:30.600711 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:30.600824 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:30.628653 1542350 cri.go:89] found id: ""
	I1213 16:13:30.628724 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.628758 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:30.628798 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:30.628886 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:30.659930 1542350 cri.go:89] found id: ""
	I1213 16:13:30.660009 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.660032 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:30.660049 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:30.660076 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:30.717289 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:30.717327 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:30.733637 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:30.733668 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:30.804923 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:30.797049    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.797717    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799180    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799496    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.800891    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:30.797049    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.797717    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799180    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799496    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.800891    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:30.804949 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:30.804966 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:30.830439 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:30.830482 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:33.359431 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:33.370707 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:33.370778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:33.404091 1542350 cri.go:89] found id: ""
	I1213 16:13:33.404114 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.404135 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:33.404141 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:33.404200 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:33.432896 1542350 cri.go:89] found id: ""
	I1213 16:13:33.432922 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.432931 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:33.432937 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:33.433006 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:33.457244 1542350 cri.go:89] found id: ""
	I1213 16:13:33.457271 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.457280 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:33.457285 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:33.457343 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:33.482368 1542350 cri.go:89] found id: ""
	I1213 16:13:33.482389 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.482397 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:33.482403 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:33.482463 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:33.506253 1542350 cri.go:89] found id: ""
	I1213 16:13:33.506276 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.506284 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:33.506290 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:33.506350 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:33.532337 1542350 cri.go:89] found id: ""
	I1213 16:13:33.532362 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.532371 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:33.532377 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:33.532435 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:33.557859 1542350 cri.go:89] found id: ""
	I1213 16:13:33.557887 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.557896 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:33.557902 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:33.557961 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:33.585180 1542350 cri.go:89] found id: ""
	I1213 16:13:33.585208 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.585216 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:33.585226 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:33.585249 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:33.626301 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:33.626332 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:33.693048 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:33.693086 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:33.709482 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:33.709550 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:33.779437 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:33.771529    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.771977    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773496    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773820    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.775408    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:33.771529    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.771977    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773496    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773820    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.775408    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:33.779461 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:33.779476 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
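Since crictl consistently reports zero containers for every control-plane component, the static pods were never created, which points back at kubelet rather than at any one addon. A hand check of kubelet and the pod sandboxes (illustrative; assumes systemd and crictl in the node image, both of which this log already exercises) could be:

    minikube -p functional-562018 ssh -- "sudo systemctl status kubelet --no-pager"
    minikube -p functional-562018 ssh -- "sudo crictl pods"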
	I1213 16:13:36.314080 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:36.324714 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:36.324793 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:36.352949 1542350 cri.go:89] found id: ""
	I1213 16:13:36.353025 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.353048 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:36.353066 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:36.353159 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:36.384496 1542350 cri.go:89] found id: ""
	I1213 16:13:36.384563 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.384586 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:36.384603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:36.384690 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:36.418779 1542350 cri.go:89] found id: ""
	I1213 16:13:36.418842 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.418866 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:36.418884 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:36.418968 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:36.448378 1542350 cri.go:89] found id: ""
	I1213 16:13:36.448420 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.448429 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:36.448445 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:36.448524 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:36.473284 1542350 cri.go:89] found id: ""
	I1213 16:13:36.473361 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.473376 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:36.473383 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:36.473454 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:36.500619 1542350 cri.go:89] found id: ""
	I1213 16:13:36.500642 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.500651 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:36.500663 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:36.500724 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:36.529444 1542350 cri.go:89] found id: ""
	I1213 16:13:36.529517 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.529532 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:36.529539 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:36.529609 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:36.553861 1542350 cri.go:89] found id: ""
	I1213 16:13:36.553886 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.553894 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:36.553904 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:36.553915 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:36.610671 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:36.610704 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:36.628462 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:36.628544 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:36.705883 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:36.697914    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.698707    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700211    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700636    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.702077    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:36.697914    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.698707    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700211    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700636    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.702077    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:36.705906 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:36.705918 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:36.730607 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:36.730646 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:39.121733 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:13:39.184741 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:39.184777 1542350 retry.go:31] will retry after 19.299456104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:39.259892 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:39.271332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:39.271403 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:39.300612 1542350 cri.go:89] found id: ""
	I1213 16:13:39.300637 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.300646 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:39.300652 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:39.300712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:39.324641 1542350 cri.go:89] found id: ""
	I1213 16:13:39.324666 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.324675 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:39.324680 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:39.324739 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:39.356074 1542350 cri.go:89] found id: ""
	I1213 16:13:39.356099 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.356108 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:39.356114 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:39.356178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:39.383742 1542350 cri.go:89] found id: ""
	I1213 16:13:39.383766 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.383775 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:39.383781 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:39.383846 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:39.411271 1542350 cri.go:89] found id: ""
	I1213 16:13:39.411297 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.411305 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:39.411334 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:39.411395 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:39.437295 1542350 cri.go:89] found id: ""
	I1213 16:13:39.437321 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.437329 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:39.437336 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:39.437419 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:39.462328 1542350 cri.go:89] found id: ""
	I1213 16:13:39.462352 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.462361 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:39.462368 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:39.462445 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:39.486926 1542350 cri.go:89] found id: ""
	I1213 16:13:39.486951 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.486961 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:39.486970 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:39.486986 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:39.545864 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:39.545902 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:39.561750 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:39.561780 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:39.648853 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:39.635798    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.636607    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.637647    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.638390    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.642907    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:39.635798    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.636607    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.637647    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.638390    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.642907    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:39.648878 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:39.648893 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:39.674238 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:39.674280 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:42.203005 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:42.217190 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:42.217290 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:42.248179 1542350 cri.go:89] found id: ""
	I1213 16:13:42.248214 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.248224 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:42.248231 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:42.248315 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:42.281373 1542350 cri.go:89] found id: ""
	I1213 16:13:42.281400 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.281409 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:42.281416 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:42.281481 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:42.313298 1542350 cri.go:89] found id: ""
	I1213 16:13:42.313327 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.313343 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:42.313351 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:42.313419 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:42.347164 1542350 cri.go:89] found id: ""
	I1213 16:13:42.347256 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.347274 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:42.347282 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:42.347421 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:42.377063 1542350 cri.go:89] found id: ""
	I1213 16:13:42.377097 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.377105 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:42.377112 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:42.377195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:42.404395 1542350 cri.go:89] found id: ""
	I1213 16:13:42.404430 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.404439 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:42.404446 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:42.404522 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:42.429038 1542350 cri.go:89] found id: ""
	I1213 16:13:42.429112 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.429128 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:42.429135 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:42.429202 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:42.453891 1542350 cri.go:89] found id: ""
	I1213 16:13:42.453935 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.453944 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:42.453954 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:42.453970 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:42.509865 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:42.509901 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:42.525994 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:42.526022 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:42.601177 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:42.590564    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.592469    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.593053    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.594708    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.595182    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:42.590564    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.592469    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.593053    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.594708    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.595182    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:42.601257 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:42.601292 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:42.630417 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:42.630495 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:45.167780 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:45.186685 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:45.186786 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:45.266905 1542350 cri.go:89] found id: ""
	I1213 16:13:45.266931 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.266941 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:45.266948 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:45.267020 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:45.302244 1542350 cri.go:89] found id: ""
	I1213 16:13:45.302273 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.302283 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:45.302289 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:45.302368 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:45.330669 1542350 cri.go:89] found id: ""
	I1213 16:13:45.330697 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.330707 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:45.330713 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:45.330777 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:45.368642 1542350 cri.go:89] found id: ""
	I1213 16:13:45.368677 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.368685 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:45.368692 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:45.368753 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:45.407608 1542350 cri.go:89] found id: ""
	I1213 16:13:45.407631 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.407639 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:45.407645 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:45.407706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:45.438077 1542350 cri.go:89] found id: ""
	I1213 16:13:45.438104 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.438112 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:45.438119 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:45.438178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:45.467617 1542350 cri.go:89] found id: ""
	I1213 16:13:45.467645 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.467654 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:45.467660 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:45.467725 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:45.496715 1542350 cri.go:89] found id: ""
	I1213 16:13:45.496741 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.496750 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:45.496760 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:45.496771 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:45.522438 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:45.522475 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:45.554662 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:45.554691 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:45.614193 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:45.614275 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:45.631794 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:45.631875 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:45.701179 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:45.692225    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.692953    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.694656    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.695386    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.697149    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:45.692225    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.692953    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.694656    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.695386    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.697149    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:48.201848 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:48.212860 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:48.212934 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:48.241802 1542350 cri.go:89] found id: ""
	I1213 16:13:48.241830 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.241838 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:48.241845 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:48.241908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:48.270100 1542350 cri.go:89] found id: ""
	I1213 16:13:48.270128 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.270137 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:48.270143 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:48.270207 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:48.295048 1542350 cri.go:89] found id: ""
	I1213 16:13:48.295073 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.295081 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:48.295087 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:48.295150 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:48.320949 1542350 cri.go:89] found id: ""
	I1213 16:13:48.320974 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.320983 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:48.320989 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:48.321048 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:48.357548 1542350 cri.go:89] found id: ""
	I1213 16:13:48.357572 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.357580 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:48.357586 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:48.357646 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:48.395642 1542350 cri.go:89] found id: ""
	I1213 16:13:48.395676 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.395685 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:48.395692 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:48.395761 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:48.426584 1542350 cri.go:89] found id: ""
	I1213 16:13:48.426611 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.426620 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:48.426626 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:48.426687 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:48.451854 1542350 cri.go:89] found id: ""
	I1213 16:13:48.451890 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.451899 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:48.451923 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:48.451938 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:48.508044 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:48.508086 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:48.523941 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:48.523971 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:48.594870 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:48.579201    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.579927    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.583793    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.584114    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.585609    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:48.579201    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.579927    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.583793    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.584114    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.585609    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:48.594893 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:48.594906 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:48.621999 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:48.622078 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:51.156024 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:51.167178 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:51.167252 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:51.198661 1542350 cri.go:89] found id: ""
	I1213 16:13:51.198684 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.198692 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:51.198699 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:51.198757 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:51.224046 1542350 cri.go:89] found id: ""
	I1213 16:13:51.224069 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.224077 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:51.224083 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:51.224149 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:51.253035 1542350 cri.go:89] found id: ""
	I1213 16:13:51.253062 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.253070 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:51.253076 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:51.253164 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:51.278917 1542350 cri.go:89] found id: ""
	I1213 16:13:51.278943 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.278952 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:51.278958 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:51.279016 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:51.305382 1542350 cri.go:89] found id: ""
	I1213 16:13:51.305405 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.305413 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:51.305419 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:51.305480 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:51.329703 1542350 cri.go:89] found id: ""
	I1213 16:13:51.329726 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.329735 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:51.329741 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:51.329800 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:51.359740 1542350 cri.go:89] found id: ""
	I1213 16:13:51.359762 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.359770 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:51.359776 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:51.359840 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:51.386446 1542350 cri.go:89] found id: ""
	I1213 16:13:51.386522 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.386544 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:51.386566 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:51.386589 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:51.412669 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:51.412707 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:51.453745 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:51.453775 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:51.511660 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:51.511698 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:51.527994 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:51.528025 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:51.595021 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:51.583262    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.584038    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.585894    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.586556    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.588263    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:51.583262    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.584038    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.585894    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.586556    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.588263    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:54.096158 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:54.107425 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:54.107512 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:54.138865 1542350 cri.go:89] found id: ""
	I1213 16:13:54.138891 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.138899 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:54.138905 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:54.138966 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:54.164096 1542350 cri.go:89] found id: ""
	I1213 16:13:54.164121 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.164130 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:54.164135 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:54.164195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:54.193309 1542350 cri.go:89] found id: ""
	I1213 16:13:54.193335 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.193345 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:54.193352 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:54.193416 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:54.219468 1542350 cri.go:89] found id: ""
	I1213 16:13:54.219490 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.219499 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:54.219520 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:54.219589 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:54.244935 1542350 cri.go:89] found id: ""
	I1213 16:13:54.244962 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.244971 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:54.244977 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:54.245038 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:54.274445 1542350 cri.go:89] found id: ""
	I1213 16:13:54.274472 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.274481 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:54.274488 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:54.274554 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:54.304121 1542350 cri.go:89] found id: ""
	I1213 16:13:54.304146 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.304154 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:54.304160 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:54.304217 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:54.329301 1542350 cri.go:89] found id: ""
	I1213 16:13:54.329327 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.329335 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:54.329350 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:54.329362 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:54.357962 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:54.358003 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:54.393726 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:54.393753 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:54.454879 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:54.454917 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:54.471046 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:54.471122 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:54.539675 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:54.530749    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.531242    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.532726    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.533202    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.535706    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:54.530749    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.531242    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.532726    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.533202    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.535706    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
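	At this point minikube has stopped finding any control-plane containers and is in its diagnostic loop: each pass lists CRI containers for every expected component, finds none, then gathers kubelet, dmesg, containerd, and container-status output, and the "describe nodes" step fails because nothing is listening on localhost:8443. The same checks can be reproduced by hand inside the node; this is only a rough sketch built from the commands already visible in this log (the ssh step itself is an assumption, e.g. `minikube ssh` into the affected profile):

		# inside the node (assumed: shell access via 'minikube ssh' or 'docker exec')
		sudo crictl ps -a --quiet --name=kube-apiserver        # returns nothing: no apiserver container exists
		sudo journalctl -u kubelet -n 400                      # kubelet-side errors / restart loop
		sudo journalctl -u containerd -n 400                   # containerd-side image and sandbox errors
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig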
	I1213 16:13:57.040543 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:57.051825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:57.051902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:57.080948 1542350 cri.go:89] found id: ""
	I1213 16:13:57.080975 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.080984 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:57.080990 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:57.081060 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:57.106564 1542350 cri.go:89] found id: ""
	I1213 16:13:57.106592 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.106602 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:57.106609 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:57.106674 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:57.132305 1542350 cri.go:89] found id: ""
	I1213 16:13:57.132332 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.132341 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:57.132347 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:57.132415 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:57.161893 1542350 cri.go:89] found id: ""
	I1213 16:13:57.161919 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.161928 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:57.161934 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:57.161996 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:57.187018 1542350 cri.go:89] found id: ""
	I1213 16:13:57.187042 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.187051 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:57.187057 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:57.187118 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:57.213450 1542350 cri.go:89] found id: ""
	I1213 16:13:57.213477 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.213486 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:57.213493 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:57.213598 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:57.239773 1542350 cri.go:89] found id: ""
	I1213 16:13:57.239799 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.239808 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:57.239814 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:57.239875 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:57.268874 1542350 cri.go:89] found id: ""
	I1213 16:13:57.268901 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.268910 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:57.268920 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:57.268932 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:57.325438 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:57.325478 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:57.345255 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:57.345288 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:57.419796 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:57.411216    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.412003    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.413617    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.414124    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.415814    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:57.411216    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.412003    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.413617    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.414124    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.415814    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:57.419818 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:57.419830 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:57.445711 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:57.445753 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:58.303454 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:13:58.370450 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:13:58.370563 1542350 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 16:13:58.485061 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:13:58.547882 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:13:58.547990 1542350 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
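	Both addon applies above (default-storageclass and dashboard) fail for the same underlying reason: kubectl cannot download the OpenAPI schema for validation because the apiserver at localhost:8443 is not running, so minikube logs "apply failed, will retry". The `--validate=false` hint in the stderr would only skip schema validation, not make the apply succeed against a dead apiserver. A minimal check, using only commands taken verbatim from this log, is:

		# no output while the control plane is down
		sudo pgrep -xnf kube-apiserver.*minikube.*
		# the retried apply; it will keep failing with "connection refused" until the apiserver is up
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml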
	I1213 16:13:59.973778 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:59.984749 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:59.984822 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:00.047691 1542350 cri.go:89] found id: ""
	I1213 16:14:00.047719 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.047729 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:00.047735 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:00.047812 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:00.172004 1542350 cri.go:89] found id: ""
	I1213 16:14:00.172032 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.172042 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:00.172048 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:00.172124 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:00.225264 1542350 cri.go:89] found id: ""
	I1213 16:14:00.225417 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.225430 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:00.225441 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:00.225515 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:00.291798 1542350 cri.go:89] found id: ""
	I1213 16:14:00.291826 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.291837 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:00.291843 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:00.291915 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:00.322720 1542350 cri.go:89] found id: ""
	I1213 16:14:00.322775 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.322785 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:00.322802 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:00.322965 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:00.382229 1542350 cri.go:89] found id: ""
	I1213 16:14:00.382259 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.382268 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:00.382276 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:00.382353 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:00.428076 1542350 cri.go:89] found id: ""
	I1213 16:14:00.428104 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.428114 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:00.428122 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:00.428188 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:00.456283 1542350 cri.go:89] found id: ""
	I1213 16:14:00.456313 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.456322 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:00.456334 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:00.456347 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:00.487074 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:00.487103 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:00.543060 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:00.543096 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:00.559570 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:00.559599 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:00.643362 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:00.632447    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.633406    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637299    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637614    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.639111    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:00.632447    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.633406    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637299    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637614    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.639111    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:00.643385 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:00.643398 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:03.169712 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:03.180422 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:03.180498 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:03.204986 1542350 cri.go:89] found id: ""
	I1213 16:14:03.205052 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.205078 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:03.205091 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:03.205167 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:03.229548 1542350 cri.go:89] found id: ""
	I1213 16:14:03.229624 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.229648 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:03.229667 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:03.229759 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:03.255379 1542350 cri.go:89] found id: ""
	I1213 16:14:03.255401 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.255410 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:03.255415 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:03.255474 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:03.281492 1542350 cri.go:89] found id: ""
	I1213 16:14:03.281516 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.281526 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:03.281532 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:03.281594 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:03.309687 1542350 cri.go:89] found id: ""
	I1213 16:14:03.309709 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.309717 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:03.309723 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:03.309781 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:03.342064 1542350 cri.go:89] found id: ""
	I1213 16:14:03.342088 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.342097 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:03.342104 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:03.342166 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:03.374355 1542350 cri.go:89] found id: ""
	I1213 16:14:03.374427 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.374449 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:03.374468 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:03.374551 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:03.402300 1542350 cri.go:89] found id: ""
	I1213 16:14:03.402373 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.402397 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:03.402419 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:03.402454 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:03.419291 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:03.419341 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:03.488415 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:03.479265    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.480042    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.481961    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.482530    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.483778    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:03.479265    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.480042    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.481961    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.482530    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.483778    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:03.488438 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:03.488450 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:03.513548 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:03.513583 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:03.541410 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:03.541438 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:06.098537 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:06.109444 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:06.109517 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:06.135738 1542350 cri.go:89] found id: ""
	I1213 16:14:06.135763 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.135772 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:06.135778 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:06.135838 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:06.164881 1542350 cri.go:89] found id: ""
	I1213 16:14:06.164907 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.164915 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:06.164921 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:06.165006 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:06.190132 1542350 cri.go:89] found id: ""
	I1213 16:14:06.190157 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.190166 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:06.190172 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:06.190237 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:06.214554 1542350 cri.go:89] found id: ""
	I1213 16:14:06.214588 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.214603 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:06.214610 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:06.214678 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:06.239546 1542350 cri.go:89] found id: ""
	I1213 16:14:06.239573 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.239582 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:06.239588 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:06.239675 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:06.265195 1542350 cri.go:89] found id: ""
	I1213 16:14:06.265223 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.265231 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:06.265237 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:06.265308 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:06.289926 1542350 cri.go:89] found id: ""
	I1213 16:14:06.289960 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.289969 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:06.289991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:06.290071 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:06.314603 1542350 cri.go:89] found id: ""
	I1213 16:14:06.314629 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.314637 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:06.314647 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:06.314683 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:06.371177 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:06.371258 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:06.393856 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:06.393930 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:06.459001 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:06.450168    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.450823    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.452646    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.453140    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.454779    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:06.450168    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.450823    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.452646    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.453140    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.454779    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:06.459025 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:06.459038 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:06.484151 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:06.484188 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:09.017168 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:09.028196 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:09.028273 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:09.056958 1542350 cri.go:89] found id: ""
	I1213 16:14:09.056983 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.056991 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:09.056997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:09.057056 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:09.081528 1542350 cri.go:89] found id: ""
	I1213 16:14:09.081554 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.081562 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:09.081568 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:09.081625 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:09.106979 1542350 cri.go:89] found id: ""
	I1213 16:14:09.107006 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.107015 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:09.107022 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:09.107082 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:09.131992 1542350 cri.go:89] found id: ""
	I1213 16:14:09.132014 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.132022 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:09.132031 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:09.132090 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:09.159379 1542350 cri.go:89] found id: ""
	I1213 16:14:09.159403 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.159411 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:09.159417 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:09.159475 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:09.188125 1542350 cri.go:89] found id: ""
	I1213 16:14:09.188148 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.188157 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:09.188163 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:09.188223 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:09.213724 1542350 cri.go:89] found id: ""
	I1213 16:14:09.213746 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.213755 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:09.213762 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:09.213820 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:09.239228 1542350 cri.go:89] found id: ""
	I1213 16:14:09.239250 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.239258 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:09.239269 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:09.239280 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:09.264873 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:09.264908 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:09.297705 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:09.297733 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:09.356080 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:09.356130 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:09.376099 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:09.376130 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:09.447156 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:09.438767    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.439139    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.440687    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.441254    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.442932    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:09.438767    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.439139    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.440687    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.441254    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.442932    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:11.948214 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:11.961565 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:11.961686 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:11.989927 1542350 cri.go:89] found id: ""
	I1213 16:14:11.989978 1542350 logs.go:282] 0 containers: []
	W1213 16:14:11.989988 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:11.989997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:11.990074 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:12.015827 1542350 cri.go:89] found id: ""
	I1213 16:14:12.015853 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.015863 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:12.015869 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:12.015931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:12.043024 1542350 cri.go:89] found id: ""
	I1213 16:14:12.043052 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.043061 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:12.043067 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:12.043129 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:12.068348 1542350 cri.go:89] found id: ""
	I1213 16:14:12.068376 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.068385 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:12.068390 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:12.068450 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:12.097740 1542350 cri.go:89] found id: ""
	I1213 16:14:12.097774 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.097783 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:12.097790 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:12.097858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:12.121723 1542350 cri.go:89] found id: ""
	I1213 16:14:12.121755 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.121764 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:12.121770 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:12.121842 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:12.150786 1542350 cri.go:89] found id: ""
	I1213 16:14:12.150813 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.150821 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:12.150827 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:12.150892 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:12.175342 1542350 cri.go:89] found id: ""
	I1213 16:14:12.175367 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.175376 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:12.175386 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:12.175404 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:12.231019 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:12.231066 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:12.247225 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:12.247257 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:12.311535 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:12.303778    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.304209    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.305700    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.306034    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.307505    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:12.303778    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.304209    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.305700    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.306034    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.307505    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:12.311562 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:12.311575 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:12.336385 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:12.336419 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:14.871456 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:14.883637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:14.883706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:14.912506 1542350 cri.go:89] found id: ""
	I1213 16:14:14.912530 1542350 logs.go:282] 0 containers: []
	W1213 16:14:14.912539 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:14.912545 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:14.912612 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:14.924965 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:14:14.948875 1542350 cri.go:89] found id: ""
	I1213 16:14:14.948908 1542350 logs.go:282] 0 containers: []
	W1213 16:14:14.948917 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:14.948923 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:14.948983 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	W1213 16:14:15.004427 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:14:15.004545 1542350 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 16:14:15.004879 1542350 cri.go:89] found id: ""
	I1213 16:14:15.004917 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.005050 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:15.005059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:15.005129 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:15.016719 1542350 out.go:179] * Enabled addons: 
	I1213 16:14:15.019727 1542350 addons.go:530] duration metric: took 1m53.607875831s for enable addons: enabled=[]
	I1213 16:14:15.061323 1542350 cri.go:89] found id: ""
	I1213 16:14:15.061351 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.061359 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:15.061366 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:15.061431 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:15.089262 1542350 cri.go:89] found id: ""
	I1213 16:14:15.089290 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.089310 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:15.089351 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:15.089416 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:15.114964 1542350 cri.go:89] found id: ""
	I1213 16:14:15.114992 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.115001 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:15.115010 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:15.115087 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:15.150205 1542350 cri.go:89] found id: ""
	I1213 16:14:15.150228 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.150237 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:15.150243 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:15.150305 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:15.179096 1542350 cri.go:89] found id: ""
	I1213 16:14:15.179124 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.179159 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:15.179170 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:15.179186 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:15.240671 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:15.240716 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:15.257989 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:15.258020 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:15.327105 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:15.316562    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.317280    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321065    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321732    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.323049    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:15.316562    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.317280    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321065    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321732    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.323049    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:15.327125 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:15.327139 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:15.356556 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:15.356601 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:17.895435 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:17.906103 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:17.906178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:17.934229 1542350 cri.go:89] found id: ""
	I1213 16:14:17.934255 1542350 logs.go:282] 0 containers: []
	W1213 16:14:17.934263 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:17.934270 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:17.934329 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:17.960923 1542350 cri.go:89] found id: ""
	I1213 16:14:17.960947 1542350 logs.go:282] 0 containers: []
	W1213 16:14:17.960955 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:17.960980 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:17.961039 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:17.986062 1542350 cri.go:89] found id: ""
	I1213 16:14:17.986096 1542350 logs.go:282] 0 containers: []
	W1213 16:14:17.986105 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:17.986111 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:17.986180 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:18.019636 1542350 cri.go:89] found id: ""
	I1213 16:14:18.019718 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.019741 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:18.019761 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:18.019858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:18.046719 1542350 cri.go:89] found id: ""
	I1213 16:14:18.046787 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.046810 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:18.046829 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:18.046924 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:18.073562 1542350 cri.go:89] found id: ""
	I1213 16:14:18.073641 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.073665 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:18.073685 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:18.073763 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:18.100968 1542350 cri.go:89] found id: ""
	I1213 16:14:18.101005 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.101014 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:18.101021 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:18.101086 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:18.127366 1542350 cri.go:89] found id: ""
	I1213 16:14:18.127391 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.127401 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:18.127410 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:18.127422 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:18.160263 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:18.160289 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:18.217033 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:18.217066 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:18.234115 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:18.234146 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:18.301091 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:18.292902    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.293484    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295231    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295655    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.297297    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:18.292902    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.293484    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295231    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295655    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.297297    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:18.301112 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:18.301126 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:20.828738 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:20.843249 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:20.843356 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:20.878301 1542350 cri.go:89] found id: ""
	I1213 16:14:20.878326 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.878335 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:20.878341 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:20.878400 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:20.911841 1542350 cri.go:89] found id: ""
	I1213 16:14:20.911863 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.911872 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:20.911877 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:20.911937 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:20.938802 1542350 cri.go:89] found id: ""
	I1213 16:14:20.938825 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.938833 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:20.938839 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:20.938895 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:20.963358 1542350 cri.go:89] found id: ""
	I1213 16:14:20.963382 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.963395 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:20.963402 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:20.963462 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:20.988428 1542350 cri.go:89] found id: ""
	I1213 16:14:20.988500 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.988516 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:20.988523 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:20.988586 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:21.015053 1542350 cri.go:89] found id: ""
	I1213 16:14:21.015088 1542350 logs.go:282] 0 containers: []
	W1213 16:14:21.015097 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:21.015104 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:21.015168 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:21.041720 1542350 cri.go:89] found id: ""
	I1213 16:14:21.041747 1542350 logs.go:282] 0 containers: []
	W1213 16:14:21.041761 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:21.041767 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:21.041844 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:21.066333 1542350 cri.go:89] found id: ""
	I1213 16:14:21.066358 1542350 logs.go:282] 0 containers: []
	W1213 16:14:21.066367 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:21.066376 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:21.066390 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:21.092074 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:21.092113 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:21.119921 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:21.119949 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:21.175737 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:21.175772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:21.192772 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:21.192802 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:21.258320 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:21.248712    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.249237    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.250941    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.251593    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.253749    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:21.248712    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.249237    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.250941    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.251593    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.253749    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:23.760202 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:23.770818 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:23.770889 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:23.797015 1542350 cri.go:89] found id: ""
	I1213 16:14:23.797038 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.797047 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:23.797053 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:23.797113 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:23.822062 1542350 cri.go:89] found id: ""
	I1213 16:14:23.822085 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.822093 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:23.822100 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:23.822158 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:23.874192 1542350 cri.go:89] found id: ""
	I1213 16:14:23.874214 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.874223 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:23.874229 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:23.874286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:23.900200 1542350 cri.go:89] found id: ""
	I1213 16:14:23.900221 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.900230 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:23.900236 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:23.900296 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:23.926269 1542350 cri.go:89] found id: ""
	I1213 16:14:23.926298 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.926306 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:23.926313 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:23.926373 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:23.953863 1542350 cri.go:89] found id: ""
	I1213 16:14:23.953893 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.953902 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:23.953909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:23.953978 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:23.978285 1542350 cri.go:89] found id: ""
	I1213 16:14:23.978314 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.978323 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:23.978332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:23.978392 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:24.004367 1542350 cri.go:89] found id: ""
	I1213 16:14:24.004397 1542350 logs.go:282] 0 containers: []
	W1213 16:14:24.004407 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:24.004418 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:24.004433 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:24.038684 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:24.038715 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:24.093699 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:24.093736 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:24.109888 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:24.109958 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:24.176373 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:24.167217    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.168183    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.169904    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.170492    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.172210    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:24.167217    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.168183    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.169904    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.170492    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.172210    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:24.176410 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:24.176423 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:26.703702 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:26.715414 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:26.715505 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:26.741617 1542350 cri.go:89] found id: ""
	I1213 16:14:26.741644 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.741653 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:26.741660 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:26.741725 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:26.773142 1542350 cri.go:89] found id: ""
	I1213 16:14:26.773166 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.773175 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:26.773180 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:26.773248 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:26.800698 1542350 cri.go:89] found id: ""
	I1213 16:14:26.800770 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.800792 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:26.800812 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:26.800916 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:26.826188 1542350 cri.go:89] found id: ""
	I1213 16:14:26.826213 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.826222 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:26.826228 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:26.826290 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:26.858537 1542350 cri.go:89] found id: ""
	I1213 16:14:26.858564 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.858573 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:26.858579 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:26.858644 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:26.893373 1542350 cri.go:89] found id: ""
	I1213 16:14:26.893401 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.893411 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:26.893417 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:26.893491 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:26.924977 1542350 cri.go:89] found id: ""
	I1213 16:14:26.925004 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.925013 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:26.925019 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:26.925080 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:26.949933 1542350 cri.go:89] found id: ""
	I1213 16:14:26.949962 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.949971 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:26.949980 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:26.949997 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:26.980349 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:26.980380 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:27.038924 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:27.038960 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:27.055463 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:27.055494 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:27.125589 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:27.116928    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.117440    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119173    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119547    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.121058    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:27.116928    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.117440    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119173    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119547    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.121058    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:27.125608 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:27.125624 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:29.652560 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:29.663991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:29.664080 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:29.692800 1542350 cri.go:89] found id: ""
	I1213 16:14:29.692825 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.692834 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:29.692841 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:29.692908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:29.724553 1542350 cri.go:89] found id: ""
	I1213 16:14:29.724585 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.724595 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:29.724603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:29.724665 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:29.750391 1542350 cri.go:89] found id: ""
	I1213 16:14:29.750460 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.750484 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:29.750502 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:29.750593 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:29.774900 1542350 cri.go:89] found id: ""
	I1213 16:14:29.774968 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.774994 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:29.775012 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:29.775104 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:29.800460 1542350 cri.go:89] found id: ""
	I1213 16:14:29.800503 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.800512 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:29.800518 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:29.800581 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:29.825184 1542350 cri.go:89] found id: ""
	I1213 16:14:29.825261 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.825285 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:29.825305 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:29.825391 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:29.857574 1542350 cri.go:89] found id: ""
	I1213 16:14:29.857604 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.857613 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:29.857619 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:29.857681 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:29.886573 1542350 cri.go:89] found id: ""
	I1213 16:14:29.886602 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.886610 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:29.886620 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:29.886636 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:29.954547 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:29.945729    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.946349    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948212    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948764    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.950488    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:29.945729    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.946349    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948212    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948764    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.950488    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:29.954614 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:29.954636 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:29.980281 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:29.980318 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:30.020553 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:30.020640 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:30.112248 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:30.112288 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:32.632543 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:32.644615 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:32.644739 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:32.671076 1542350 cri.go:89] found id: ""
	I1213 16:14:32.671103 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.671115 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:32.671124 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:32.671204 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:32.705219 1542350 cri.go:89] found id: ""
	I1213 16:14:32.705245 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.705255 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:32.705264 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:32.705345 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:32.734663 1542350 cri.go:89] found id: ""
	I1213 16:14:32.734764 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.734796 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:32.734826 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:32.734911 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:32.763416 1542350 cri.go:89] found id: ""
	I1213 16:14:32.763441 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.763451 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:32.763457 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:32.763519 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:32.790404 1542350 cri.go:89] found id: ""
	I1213 16:14:32.790478 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.790500 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:32.790519 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:32.790638 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:32.818613 1542350 cri.go:89] found id: ""
	I1213 16:14:32.818699 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.818735 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:32.818773 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:32.818908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:32.850999 1542350 cri.go:89] found id: ""
	I1213 16:14:32.851029 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.851038 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:32.851050 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:32.851113 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:32.883800 1542350 cri.go:89] found id: ""
	I1213 16:14:32.883828 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.883837 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:32.883846 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:32.883857 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:32.950061 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:32.950111 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:32.967586 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:32.967617 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:33.038320 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:33.029200    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.030090    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.031863    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.032322    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.033913    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:33.029200    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.030090    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.031863    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.032322    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.033913    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:33.038342 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:33.038357 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:33.066098 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:33.066154 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:35.607481 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:35.619526 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:35.619589 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:35.646097 1542350 cri.go:89] found id: ""
	I1213 16:14:35.646120 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.646131 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:35.646137 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:35.646197 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:35.671288 1542350 cri.go:89] found id: ""
	I1213 16:14:35.671349 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.671358 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:35.671364 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:35.671428 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:35.696891 1542350 cri.go:89] found id: ""
	I1213 16:14:35.696915 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.696923 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:35.696930 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:35.696990 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:35.722027 1542350 cri.go:89] found id: ""
	I1213 16:14:35.722049 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.722057 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:35.722063 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:35.722120 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:35.746428 1542350 cri.go:89] found id: ""
	I1213 16:14:35.746450 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.746458 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:35.746465 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:35.746521 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:35.771433 1542350 cri.go:89] found id: ""
	I1213 16:14:35.771456 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.771465 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:35.771471 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:35.771527 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:35.795226 1542350 cri.go:89] found id: ""
	I1213 16:14:35.795292 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.795408 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:35.795422 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:35.795494 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:35.819205 1542350 cri.go:89] found id: ""
	I1213 16:14:35.819237 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.819246 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:35.819256 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:35.819268 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:35.856667 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:35.856698 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:35.921282 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:35.921317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:35.937351 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:35.937379 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:36.013024 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:35.998154    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:35.998523    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000027    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000325    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.001562    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:35.998154    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:35.998523    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000027    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000325    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.001562    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:36.013050 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:36.013065 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:38.540010 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:38.553894 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:38.553969 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:38.587080 1542350 cri.go:89] found id: ""
	I1213 16:14:38.587102 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.587110 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:38.587116 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:38.587180 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:38.615796 1542350 cri.go:89] found id: ""
	I1213 16:14:38.615820 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.615829 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:38.615835 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:38.615895 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:38.652609 1542350 cri.go:89] found id: ""
	I1213 16:14:38.652634 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.652643 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:38.652649 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:38.652706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:38.681712 1542350 cri.go:89] found id: ""
	I1213 16:14:38.681738 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.681747 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:38.681753 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:38.681812 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:38.707047 1542350 cri.go:89] found id: ""
	I1213 16:14:38.707076 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.707085 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:38.707091 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:38.707154 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:38.731834 1542350 cri.go:89] found id: ""
	I1213 16:14:38.731868 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.731878 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:38.731884 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:38.731951 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:38.755752 1542350 cri.go:89] found id: ""
	I1213 16:14:38.755816 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.755838 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:38.755855 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:38.755940 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:38.780290 1542350 cri.go:89] found id: ""
	I1213 16:14:38.780316 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.780325 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:38.780335 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:38.780354 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:38.837581 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:38.837613 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:38.855100 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:38.855130 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:38.927088 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:38.917976    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.918723    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.920509    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.921087    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.922821    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:38.917976    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.918723    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.920509    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.921087    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.922821    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:38.927155 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:38.927178 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:38.952089 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:38.952127 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:41.483644 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:41.494493 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:41.494574 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:41.518966 1542350 cri.go:89] found id: ""
	I1213 16:14:41.518988 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.518996 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:41.519002 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:41.519066 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:41.545695 1542350 cri.go:89] found id: ""
	I1213 16:14:41.545720 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.545729 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:41.545734 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:41.545798 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:41.571565 1542350 cri.go:89] found id: ""
	I1213 16:14:41.571591 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.571600 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:41.571606 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:41.571673 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:41.619450 1542350 cri.go:89] found id: ""
	I1213 16:14:41.619473 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.619482 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:41.619488 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:41.619548 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:41.653736 1542350 cri.go:89] found id: ""
	I1213 16:14:41.653757 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.653766 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:41.653773 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:41.653835 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:41.682235 1542350 cri.go:89] found id: ""
	I1213 16:14:41.682257 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.682265 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:41.682272 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:41.682332 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:41.708453 1542350 cri.go:89] found id: ""
	I1213 16:14:41.708475 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.708489 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:41.708496 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:41.708554 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:41.737148 1542350 cri.go:89] found id: ""
	I1213 16:14:41.737171 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.737179 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:41.737193 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:41.737205 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:41.792082 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:41.792120 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:41.808566 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:41.808597 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:41.888202 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:41.877628    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.878620    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.880407    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.881012    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.883935    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:41.877628    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.878620    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.880407    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.881012    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.883935    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:41.888226 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:41.888238 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:41.913429 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:41.913466 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:44.445881 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:44.456550 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:44.456627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:44.482008 1542350 cri.go:89] found id: ""
	I1213 16:14:44.482031 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.482039 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:44.482045 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:44.482103 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:44.507630 1542350 cri.go:89] found id: ""
	I1213 16:14:44.507654 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.507662 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:44.507668 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:44.507729 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:44.536680 1542350 cri.go:89] found id: ""
	I1213 16:14:44.536704 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.536713 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:44.536719 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:44.536778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:44.565166 1542350 cri.go:89] found id: ""
	I1213 16:14:44.565189 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.565199 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:44.565205 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:44.565265 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:44.598174 1542350 cri.go:89] found id: ""
	I1213 16:14:44.598197 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.598206 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:44.598214 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:44.598280 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:44.640061 1542350 cri.go:89] found id: ""
	I1213 16:14:44.640084 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.640092 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:44.640099 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:44.640159 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:44.671940 1542350 cri.go:89] found id: ""
	I1213 16:14:44.671968 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.671976 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:44.671982 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:44.672044 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:44.698885 1542350 cri.go:89] found id: ""
	I1213 16:14:44.698907 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.698916 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:44.698925 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:44.698939 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:44.715019 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:44.715090 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:44.777959 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:44.769893    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.770508    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772034    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772476    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.773993    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:44.769893    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.770508    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772034    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772476    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.773993    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:44.777983 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:44.777996 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:44.803994 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:44.804031 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:44.835446 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:44.835476 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:47.402282 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:47.413184 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:47.413252 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:47.439678 1542350 cri.go:89] found id: ""
	I1213 16:14:47.439702 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.439710 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:47.439717 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:47.439777 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:47.469694 1542350 cri.go:89] found id: ""
	I1213 16:14:47.469720 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.469728 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:47.469734 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:47.469797 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:47.495280 1542350 cri.go:89] found id: ""
	I1213 16:14:47.495306 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.495339 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:47.495346 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:47.495408 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:47.525092 1542350 cri.go:89] found id: ""
	I1213 16:14:47.525118 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.525127 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:47.525133 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:47.525194 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:47.551755 1542350 cri.go:89] found id: ""
	I1213 16:14:47.551782 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.551790 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:47.551797 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:47.551858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:47.577368 1542350 cri.go:89] found id: ""
	I1213 16:14:47.577393 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.577402 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:47.577408 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:47.577479 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:47.603993 1542350 cri.go:89] found id: ""
	I1213 16:14:47.604016 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.604024 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:47.604030 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:47.604095 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:47.634166 1542350 cri.go:89] found id: ""
	I1213 16:14:47.634188 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.634197 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:47.634206 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:47.634217 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:47.698875 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:47.698911 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:47.715548 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:47.715580 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:47.783485 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:47.774772    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.775432    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777207    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777881    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.779547    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:47.774772    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.775432    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777207    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777881    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.779547    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:47.783508 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:47.783521 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:47.809639 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:47.809672 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:50.342353 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:50.355175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:50.355303 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:50.381034 1542350 cri.go:89] found id: ""
	I1213 16:14:50.381066 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.381076 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:50.381084 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:50.381166 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:50.409181 1542350 cri.go:89] found id: ""
	I1213 16:14:50.409208 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.409217 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:50.409222 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:50.409286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:50.438419 1542350 cri.go:89] found id: ""
	I1213 16:14:50.438451 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.438460 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:50.438466 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:50.438525 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:50.468687 1542350 cri.go:89] found id: ""
	I1213 16:14:50.468713 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.468721 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:50.468728 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:50.468789 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:50.498096 1542350 cri.go:89] found id: ""
	I1213 16:14:50.498163 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.498187 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:50.498205 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:50.498292 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:50.523754 1542350 cri.go:89] found id: ""
	I1213 16:14:50.523820 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.523835 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:50.523843 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:50.523902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:50.555302 1542350 cri.go:89] found id: ""
	I1213 16:14:50.555387 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.555403 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:50.555410 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:50.555477 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:50.581005 1542350 cri.go:89] found id: ""
	I1213 16:14:50.581035 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.581044 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:50.581054 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:50.581067 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:50.611931 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:50.612005 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:50.650728 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:50.650754 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:50.709840 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:50.709878 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:50.729613 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:50.729711 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:50.796424 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:50.788003    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.788585    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790191    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790714    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.792323    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:50.788003    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.788585    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790191    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790714    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.792323    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:53.298328 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:53.309106 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:53.309178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:53.333481 1542350 cri.go:89] found id: ""
	I1213 16:14:53.333513 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.333523 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:53.333529 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:53.333590 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:53.358898 1542350 cri.go:89] found id: ""
	I1213 16:14:53.358923 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.358932 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:53.358938 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:53.358999 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:53.384286 1542350 cri.go:89] found id: ""
	I1213 16:14:53.384311 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.384322 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:53.384329 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:53.384388 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:53.408999 1542350 cri.go:89] found id: ""
	I1213 16:14:53.409022 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.409031 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:53.409037 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:53.409102 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:53.437666 1542350 cri.go:89] found id: ""
	I1213 16:14:53.437688 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.437696 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:53.437703 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:53.437764 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:53.462775 1542350 cri.go:89] found id: ""
	I1213 16:14:53.462868 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.462885 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:53.462893 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:53.462955 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:53.489379 1542350 cri.go:89] found id: ""
	I1213 16:14:53.489403 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.489413 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:53.489419 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:53.489479 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:53.513660 1542350 cri.go:89] found id: ""
	I1213 16:14:53.513683 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.513691 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:53.513701 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:53.513711 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:53.544644 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:53.544670 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:53.603653 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:53.603733 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:53.620761 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:53.620846 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:53.694809 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:53.685787    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.686311    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688155    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688956    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.690579    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:53.685787    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.686311    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688155    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688956    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.690579    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:53.694871 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:53.694886 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:56.222442 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:56.233418 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:56.233521 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:56.262552 1542350 cri.go:89] found id: ""
	I1213 16:14:56.262578 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.262587 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:56.262594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:56.262677 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:56.290583 1542350 cri.go:89] found id: ""
	I1213 16:14:56.290611 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.290620 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:56.290627 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:56.290778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:56.316264 1542350 cri.go:89] found id: ""
	I1213 16:14:56.316292 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.316300 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:56.316306 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:56.316366 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:56.341047 1542350 cri.go:89] found id: ""
	I1213 16:14:56.341072 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.341080 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:56.341086 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:56.341163 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:56.369874 1542350 cri.go:89] found id: ""
	I1213 16:14:56.369909 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.369918 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:56.369924 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:56.369993 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:56.396373 1542350 cri.go:89] found id: ""
	I1213 16:14:56.396400 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.396408 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:56.396415 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:56.396480 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:56.421264 1542350 cri.go:89] found id: ""
	I1213 16:14:56.421286 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.421294 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:56.421300 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:56.421362 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:56.449683 1542350 cri.go:89] found id: ""
	I1213 16:14:56.449708 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.449717 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:56.449727 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:56.449740 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:56.513612 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:56.505286    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.506108    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.507846    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.508199    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.509740    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:56.505286    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.506108    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.507846    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.508199    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.509740    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:56.513635 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:56.513648 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:56.539159 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:56.539193 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:56.569885 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:56.569913 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:56.636667 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:56.636712 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:59.161215 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:59.172070 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:59.172139 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:59.196977 1542350 cri.go:89] found id: ""
	I1213 16:14:59.197003 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.197013 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:59.197019 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:59.197124 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:59.222813 1542350 cri.go:89] found id: ""
	I1213 16:14:59.222839 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.222849 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:59.222855 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:59.222921 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:59.249285 1542350 cri.go:89] found id: ""
	I1213 16:14:59.249309 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.249317 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:59.249323 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:59.249385 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:59.275052 1542350 cri.go:89] found id: ""
	I1213 16:14:59.275076 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.275085 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:59.275091 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:59.275152 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:59.301297 1542350 cri.go:89] found id: ""
	I1213 16:14:59.301323 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.301331 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:59.301337 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:59.301395 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:59.326556 1542350 cri.go:89] found id: ""
	I1213 16:14:59.326582 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.326591 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:59.326599 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:59.326658 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:59.360044 1542350 cri.go:89] found id: ""
	I1213 16:14:59.360070 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.360079 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:59.360085 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:59.360145 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:59.385355 1542350 cri.go:89] found id: ""
	I1213 16:14:59.385380 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.385389 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:59.385398 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:59.385410 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:59.441005 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:59.441040 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:59.456936 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:59.456968 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:59.523389 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:59.514843    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.515544    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517163    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517739    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.519447    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:59.514843    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.515544    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517163    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517739    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.519447    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:59.523410 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:59.523423 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:59.548680 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:59.548717 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:02.077266 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:02.091997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:02.092082 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:02.125051 1542350 cri.go:89] found id: ""
	I1213 16:15:02.125079 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.125088 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:02.125095 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:02.125158 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:02.155518 1542350 cri.go:89] found id: ""
	I1213 16:15:02.155547 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.155555 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:02.155567 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:02.155626 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:02.180408 1542350 cri.go:89] found id: ""
	I1213 16:15:02.180435 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.180444 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:02.180450 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:02.180541 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:02.206923 1542350 cri.go:89] found id: ""
	I1213 16:15:02.206957 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.206966 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:02.206979 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:02.207049 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:02.234308 1542350 cri.go:89] found id: ""
	I1213 16:15:02.234332 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.234341 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:02.234347 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:02.234412 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:02.260647 1542350 cri.go:89] found id: ""
	I1213 16:15:02.260671 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.260680 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:02.260686 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:02.260746 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:02.287051 1542350 cri.go:89] found id: ""
	I1213 16:15:02.287075 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.287083 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:02.287089 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:02.287151 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:02.313703 1542350 cri.go:89] found id: ""
	I1213 16:15:02.313726 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.313734 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:02.313744 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:02.313755 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:02.369628 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:02.369663 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:02.385814 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:02.385896 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:02.450440 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:02.441569    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.442433    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.443449    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445136    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445445    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:02.441569    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.442433    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.443449    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445136    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445445    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:02.450460 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:02.450475 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:02.475994 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:02.476032 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:05.008952 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:05.023767 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:05.023852 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:05.048943 1542350 cri.go:89] found id: ""
	I1213 16:15:05.048970 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.048979 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:05.048985 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:05.049046 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:05.073030 1542350 cri.go:89] found id: ""
	I1213 16:15:05.073057 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.073066 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:05.073072 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:05.073141 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:05.113695 1542350 cri.go:89] found id: ""
	I1213 16:15:05.113724 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.113733 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:05.113739 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:05.113798 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:05.143435 1542350 cri.go:89] found id: ""
	I1213 16:15:05.143462 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.143471 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:05.143476 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:05.143533 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:05.169643 1542350 cri.go:89] found id: ""
	I1213 16:15:05.169672 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.169682 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:05.169694 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:05.169756 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:05.194836 1542350 cri.go:89] found id: ""
	I1213 16:15:05.194865 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.194874 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:05.194881 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:05.194939 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:05.223183 1542350 cri.go:89] found id: ""
	I1213 16:15:05.223208 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.223216 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:05.223223 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:05.223284 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:05.247344 1542350 cri.go:89] found id: ""
	I1213 16:15:05.247368 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.247377 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:05.247386 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:05.247400 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:05.302110 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:05.302144 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:05.318507 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:05.318537 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:05.383855 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:05.375536    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.376623    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.377525    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.378529    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.380053    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:05.375536    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.376623    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.377525    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.378529    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.380053    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:05.383878 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:05.383891 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:05.408947 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:05.408984 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:07.939749 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:07.950076 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:07.950150 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:07.975327 1542350 cri.go:89] found id: ""
	I1213 16:15:07.975351 1542350 logs.go:282] 0 containers: []
	W1213 16:15:07.975360 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:07.975366 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:07.975423 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:07.999830 1542350 cri.go:89] found id: ""
	I1213 16:15:07.999856 1542350 logs.go:282] 0 containers: []
	W1213 16:15:07.999864 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:07.999870 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:07.999928 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:08.026521 1542350 cri.go:89] found id: ""
	I1213 16:15:08.026547 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.026556 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:08.026562 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:08.026627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:08.053320 1542350 cri.go:89] found id: ""
	I1213 16:15:08.053343 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.053352 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:08.053358 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:08.053418 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:08.084631 1542350 cri.go:89] found id: ""
	I1213 16:15:08.084654 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.084663 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:08.084669 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:08.084727 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:08.115761 1542350 cri.go:89] found id: ""
	I1213 16:15:08.115842 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.115866 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:08.115884 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:08.115992 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:08.143108 1542350 cri.go:89] found id: ""
	I1213 16:15:08.143131 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.143141 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:08.143150 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:08.143210 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:08.169485 1542350 cri.go:89] found id: ""
	I1213 16:15:08.169548 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.169571 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:08.169593 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:08.169632 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:08.186535 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:08.186608 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:08.254187 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:08.245289    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.245832    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.247630    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.248117    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.249849    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:08.245289    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.245832    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.247630    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.248117    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.249849    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:08.254252 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:08.254277 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:08.279498 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:08.279538 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:08.307012 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:08.307040 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:10.863431 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:10.875836 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:10.875902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:10.902828 1542350 cri.go:89] found id: ""
	I1213 16:15:10.902850 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.902859 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:10.902864 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:10.902924 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:10.927709 1542350 cri.go:89] found id: ""
	I1213 16:15:10.927732 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.927741 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:10.927747 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:10.927807 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:10.952424 1542350 cri.go:89] found id: ""
	I1213 16:15:10.952448 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.952457 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:10.952466 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:10.952544 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:10.977056 1542350 cri.go:89] found id: ""
	I1213 16:15:10.977087 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.977095 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:10.977101 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:10.977163 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:11.006742 1542350 cri.go:89] found id: ""
	I1213 16:15:11.006767 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.006776 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:11.006782 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:11.006857 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:11.033448 1542350 cri.go:89] found id: ""
	I1213 16:15:11.033471 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.033481 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:11.033491 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:11.033549 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:11.058288 1542350 cri.go:89] found id: ""
	I1213 16:15:11.058319 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.058329 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:11.058335 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:11.058403 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:11.086206 1542350 cri.go:89] found id: ""
	I1213 16:15:11.086229 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.086238 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:11.086248 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:11.086260 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:11.149204 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:11.149250 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:11.169208 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:11.169240 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:11.239824 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:11.230926    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.231789    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233414    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233896    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.235615    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:11.230926    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.231789    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233414    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233896    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.235615    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:11.239888 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:11.239913 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:11.265156 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:11.265190 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:13.793650 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:13.804879 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:13.804957 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:13.830496 1542350 cri.go:89] found id: ""
	I1213 16:15:13.830524 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.830534 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:13.830541 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:13.830598 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:13.860289 1542350 cri.go:89] found id: ""
	I1213 16:15:13.860316 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.860325 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:13.860331 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:13.860404 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:13.889862 1542350 cri.go:89] found id: ""
	I1213 16:15:13.889900 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.889909 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:13.889915 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:13.889982 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:13.917096 1542350 cri.go:89] found id: ""
	I1213 16:15:13.917119 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.917127 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:13.917134 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:13.917192 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:13.941374 1542350 cri.go:89] found id: ""
	I1213 16:15:13.941397 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.941406 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:13.941412 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:13.941472 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:13.966429 1542350 cri.go:89] found id: ""
	I1213 16:15:13.966457 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.966467 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:13.966474 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:13.966536 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:13.992124 1542350 cri.go:89] found id: ""
	I1213 16:15:13.992193 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.992217 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:13.992231 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:13.992304 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:14.018581 1542350 cri.go:89] found id: ""
	I1213 16:15:14.018613 1542350 logs.go:282] 0 containers: []
	W1213 16:15:14.018621 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:14.018631 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:14.018643 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:14.076560 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:14.076594 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:14.093391 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:14.093470 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:14.169809 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:14.161020    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.162081    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.163786    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.164233    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.165949    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:14.161020    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.162081    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.163786    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.164233    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.165949    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:14.169831 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:14.169844 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:14.196553 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:14.196588 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:16.730383 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:16.741020 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:16.741091 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:16.765402 1542350 cri.go:89] found id: ""
	I1213 16:15:16.765425 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.765434 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:16.765440 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:16.765498 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:16.791004 1542350 cri.go:89] found id: ""
	I1213 16:15:16.791033 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.791042 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:16.791048 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:16.791112 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:16.816897 1542350 cri.go:89] found id: ""
	I1213 16:15:16.816925 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.816933 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:16.816939 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:16.817002 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:16.861774 1542350 cri.go:89] found id: ""
	I1213 16:15:16.861796 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.861803 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:16.861809 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:16.861868 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:16.895555 1542350 cri.go:89] found id: ""
	I1213 16:15:16.895575 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.895584 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:16.895589 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:16.895650 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:16.923607 1542350 cri.go:89] found id: ""
	I1213 16:15:16.923630 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.923638 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:16.923644 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:16.923705 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:16.952569 1542350 cri.go:89] found id: ""
	I1213 16:15:16.952602 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.952612 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:16.952618 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:16.952681 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:16.982597 1542350 cri.go:89] found id: ""
	I1213 16:15:16.982625 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.982634 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:16.982644 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:16.982657 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:17.040379 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:17.040417 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:17.056673 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:17.056703 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:17.155960 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:17.135813    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.147685    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.148473    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150347    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150872    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:17.135813    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.147685    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.148473    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150347    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150872    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:17.155984 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:17.155997 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:17.181703 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:17.181742 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
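Each polling cycle above runs the same node-side diagnostics, and every container query comes back empty. As a hedged reproduction sketch (not part of the captured log), the same checks can be rerun by hand over minikube ssh against the affected profile; the profile name is not shown in this excerpt, so <profile> below is a placeholder:

	# List any kube-apiserver container known to containerd (same query as the cri.go lines above)
	minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name=kube-apiserver
	# Pull the kubelet and containerd journals that the log collector gathers each cycle
	minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400
	minikube ssh -p <profile> -- sudo journalctl -u containerd -n 400

An empty result from the first command corresponds to the repeated found id: "" entries in the log.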
	I1213 16:15:19.710412 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:19.723576 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:19.723654 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:19.752079 1542350 cri.go:89] found id: ""
	I1213 16:15:19.752102 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.752111 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:19.752117 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:19.752198 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:19.776763 1542350 cri.go:89] found id: ""
	I1213 16:15:19.776829 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.776845 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:19.776853 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:19.776912 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:19.803069 1542350 cri.go:89] found id: ""
	I1213 16:15:19.803133 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.803149 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:19.803157 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:19.803216 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:19.828299 1542350 cri.go:89] found id: ""
	I1213 16:15:19.828332 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.828342 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:19.828348 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:19.828419 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:19.858915 1542350 cri.go:89] found id: ""
	I1213 16:15:19.858992 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.859013 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:19.859032 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:19.859127 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:19.889950 1542350 cri.go:89] found id: ""
	I1213 16:15:19.889987 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.889996 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:19.890003 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:19.890076 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:19.915855 1542350 cri.go:89] found id: ""
	I1213 16:15:19.915879 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.915893 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:19.915899 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:19.915958 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:19.945371 1542350 cri.go:89] found id: ""
	I1213 16:15:19.945409 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.945418 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:19.945460 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:19.945484 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:20.004545 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:20.004586 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:20.030075 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:20.030110 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:20.119134 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:20.105779    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.107335    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.108625    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.110278    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.111655    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:20.105779    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.107335    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.108625    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.110278    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.111655    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:20.119228 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:20.119426 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:20.157972 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:20.158017 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:22.690836 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:22.701577 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:22.701651 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:22.725883 1542350 cri.go:89] found id: ""
	I1213 16:15:22.725908 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.725917 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:22.725922 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:22.725980 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:22.750347 1542350 cri.go:89] found id: ""
	I1213 16:15:22.750373 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.750382 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:22.750388 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:22.750446 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:22.773604 1542350 cri.go:89] found id: ""
	I1213 16:15:22.773627 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.773636 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:22.773642 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:22.773699 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:22.798122 1542350 cri.go:89] found id: ""
	I1213 16:15:22.798144 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.798153 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:22.798159 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:22.798216 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:22.825364 1542350 cri.go:89] found id: ""
	I1213 16:15:22.825386 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.825394 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:22.825400 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:22.825463 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:22.860458 1542350 cri.go:89] found id: ""
	I1213 16:15:22.860480 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.860489 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:22.860503 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:22.860560 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:22.888782 1542350 cri.go:89] found id: ""
	I1213 16:15:22.888865 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.888889 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:22.888907 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:22.888991 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:22.917264 1542350 cri.go:89] found id: ""
	I1213 16:15:22.917288 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.917297 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:22.917306 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:22.917318 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:22.947808 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:22.947850 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:23.002868 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:23.002910 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:23.019957 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:23.019988 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:23.095906 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:23.076845    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.077548    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079269    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079937    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.083952    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:23.076845    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.077548    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079269    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079937    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.083952    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:23.095985 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:23.096017 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:25.625418 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:25.636179 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:25.636256 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:25.660796 1542350 cri.go:89] found id: ""
	I1213 16:15:25.660819 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.660827 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:25.660833 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:25.660890 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:25.692137 1542350 cri.go:89] found id: ""
	I1213 16:15:25.692161 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.692169 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:25.692175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:25.692234 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:25.722645 1542350 cri.go:89] found id: ""
	I1213 16:15:25.722667 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.722677 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:25.722683 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:25.722741 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:25.746597 1542350 cri.go:89] found id: ""
	I1213 16:15:25.746619 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.746627 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:25.746633 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:25.746690 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:25.773364 1542350 cri.go:89] found id: ""
	I1213 16:15:25.773391 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.773399 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:25.773405 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:25.773464 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:25.798024 1542350 cri.go:89] found id: ""
	I1213 16:15:25.798047 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.798056 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:25.798062 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:25.798140 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:25.824949 1542350 cri.go:89] found id: ""
	I1213 16:15:25.824975 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.824984 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:25.824989 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:25.825065 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:25.851736 1542350 cri.go:89] found id: ""
	I1213 16:15:25.851809 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.851843 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:25.851869 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:25.851910 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:25.868875 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:25.868902 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:25.941457 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:25.933124    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.933785    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935483    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935919    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.937566    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:25.933124    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.933785    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935483    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935919    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.937566    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:25.941527 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:25.941548 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:25.966625 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:25.966656 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:25.996976 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:25.997004 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:28.556122 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:28.567257 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:28.567352 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:28.592087 1542350 cri.go:89] found id: ""
	I1213 16:15:28.592153 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.592179 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:28.592196 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:28.592293 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:28.616658 1542350 cri.go:89] found id: ""
	I1213 16:15:28.616731 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.616746 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:28.616753 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:28.616822 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:28.640310 1542350 cri.go:89] found id: ""
	I1213 16:15:28.640335 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.640344 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:28.640349 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:28.640412 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:28.665406 1542350 cri.go:89] found id: ""
	I1213 16:15:28.665433 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.665443 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:28.665449 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:28.665508 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:28.690028 1542350 cri.go:89] found id: ""
	I1213 16:15:28.690090 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.690121 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:28.690143 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:28.690247 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:28.714656 1542350 cri.go:89] found id: ""
	I1213 16:15:28.714719 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.714753 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:28.714775 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:28.714862 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:28.741721 1542350 cri.go:89] found id: ""
	I1213 16:15:28.741745 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.741753 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:28.741759 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:28.741860 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:28.770039 1542350 cri.go:89] found id: ""
	I1213 16:15:28.770106 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.770132 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:28.770153 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:28.770191 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:28.794482 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:28.794514 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:28.825722 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:28.825751 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:28.885792 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:28.885826 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:28.902629 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:28.902658 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:28.968699 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:28.960883    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.961407    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.962907    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.963230    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.964863    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:28.960883    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.961407    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.962907    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.963230    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.964863    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
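The repeated "kubectl describe nodes" failures are a downstream symptom: kubectl inside the node dials localhost:8443 and is refused because no kube-apiserver container was ever started, so the root cause is usually found in the kubelet and containerd journals collected in the same cycle. A hedged sketch for narrowing those journals down, again with <profile> as a placeholder:

	# Filter the last 400 kubelet journal lines for apiserver-related errors
	minikube ssh -p <profile> -- "sudo journalctl -u kubelet -n 400 | grep -iE 'kube-apiserver|fail|error'"

This only filters the same journal the collector already gathers; it does not change cluster state.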
	I1213 16:15:31.469803 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:31.480479 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:31.480600 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:31.512783 1542350 cri.go:89] found id: ""
	I1213 16:15:31.512807 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.512816 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:31.512823 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:31.512881 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:31.539773 1542350 cri.go:89] found id: ""
	I1213 16:15:31.539800 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.539815 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:31.539836 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:31.539915 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:31.564690 1542350 cri.go:89] found id: ""
	I1213 16:15:31.564715 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.564723 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:31.564729 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:31.564791 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:31.589449 1542350 cri.go:89] found id: ""
	I1213 16:15:31.589476 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.589484 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:31.589490 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:31.589550 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:31.614171 1542350 cri.go:89] found id: ""
	I1213 16:15:31.614203 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.614212 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:31.614218 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:31.614278 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:31.641466 1542350 cri.go:89] found id: ""
	I1213 16:15:31.641489 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.641498 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:31.641505 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:31.641563 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:31.665618 1542350 cri.go:89] found id: ""
	I1213 16:15:31.665641 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.665649 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:31.665656 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:31.665715 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:31.694436 1542350 cri.go:89] found id: ""
	I1213 16:15:31.694531 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.694554 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:31.694589 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:31.694621 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:31.720014 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:31.720047 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:31.746773 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:31.746844 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:31.802034 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:31.802070 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:31.819067 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:31.819096 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:31.926406 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:31.917414    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.918134    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.919118    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.920759    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.921377    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:31.917414    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.918134    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.919118    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.920759    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.921377    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:34.427501 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:34.438467 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:34.438539 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:34.469663 1542350 cri.go:89] found id: ""
	I1213 16:15:34.469685 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.469693 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:34.469699 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:34.469763 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:34.497352 1542350 cri.go:89] found id: ""
	I1213 16:15:34.497375 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.497384 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:34.497391 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:34.497449 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:34.522437 1542350 cri.go:89] found id: ""
	I1213 16:15:34.522462 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.522471 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:34.522477 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:34.522533 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:34.546310 1542350 cri.go:89] found id: ""
	I1213 16:15:34.546335 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.546344 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:34.546350 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:34.546410 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:34.570057 1542350 cri.go:89] found id: ""
	I1213 16:15:34.570082 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.570091 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:34.570097 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:34.570154 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:34.597335 1542350 cri.go:89] found id: ""
	I1213 16:15:34.597360 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.597369 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:34.597375 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:34.597438 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:34.622402 1542350 cri.go:89] found id: ""
	I1213 16:15:34.622426 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.622435 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:34.622441 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:34.622501 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:34.647379 1542350 cri.go:89] found id: ""
	I1213 16:15:34.647405 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.647414 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:34.647423 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:34.647435 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:34.707433 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:34.699526    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.700203    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701513    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701944    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.703518    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:34.699526    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.700203    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701513    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701944    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.703518    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:34.707452 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:34.707464 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:34.732617 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:34.732650 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:34.760551 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:34.760579 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:34.817043 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:34.817078 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:37.335446 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:37.346358 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:37.346480 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:37.375693 1542350 cri.go:89] found id: ""
	I1213 16:15:37.375763 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.375784 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:37.375803 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:37.375896 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:37.401729 1542350 cri.go:89] found id: ""
	I1213 16:15:37.401753 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.401761 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:37.401768 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:37.401832 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:37.426557 1542350 cri.go:89] found id: ""
	I1213 16:15:37.426583 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.426591 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:37.426597 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:37.426659 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:37.452633 1542350 cri.go:89] found id: ""
	I1213 16:15:37.452658 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.452666 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:37.452672 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:37.452731 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:37.476262 1542350 cri.go:89] found id: ""
	I1213 16:15:37.476287 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.476296 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:37.476302 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:37.476388 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:37.501165 1542350 cri.go:89] found id: ""
	I1213 16:15:37.501190 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.501198 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:37.501204 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:37.501285 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:37.524960 1542350 cri.go:89] found id: ""
	I1213 16:15:37.524983 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.524991 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:37.524997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:37.525055 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:37.550053 1542350 cri.go:89] found id: ""
	I1213 16:15:37.550079 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.550088 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:37.550097 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:37.550109 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:37.613799 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:37.604980    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.605842    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.607596    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.608285    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.609930    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:37.604980    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.605842    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.607596    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.608285    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.609930    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:37.613824 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:37.613837 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:37.638525 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:37.638559 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:37.665937 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:37.665965 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:37.722593 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:37.722628 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:40.238420 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:40.249230 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:40.249314 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:40.273014 1542350 cri.go:89] found id: ""
	I1213 16:15:40.273089 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.273133 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:40.273147 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:40.273227 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:40.298488 1542350 cri.go:89] found id: ""
	I1213 16:15:40.298553 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.298577 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:40.298595 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:40.298679 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:40.323131 1542350 cri.go:89] found id: ""
	I1213 16:15:40.323204 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.323228 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:40.323246 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:40.323368 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:40.360968 1542350 cri.go:89] found id: ""
	I1213 16:15:40.360996 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.361005 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:40.361011 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:40.361081 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:40.392530 1542350 cri.go:89] found id: ""
	I1213 16:15:40.392564 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.392573 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:40.392580 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:40.392648 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:40.428563 1542350 cri.go:89] found id: ""
	I1213 16:15:40.428588 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.428597 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:40.428603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:40.428686 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:40.453234 1542350 cri.go:89] found id: ""
	I1213 16:15:40.453259 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.453267 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:40.453274 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:40.453373 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:40.477074 1542350 cri.go:89] found id: ""
	I1213 16:15:40.477099 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.477108 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:40.477117 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:40.477144 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:40.503301 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:40.503521 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:40.537464 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:40.537493 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:40.593489 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:40.593526 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:40.609479 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:40.609507 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:40.674540 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:40.665852    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.666546    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.668621    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.669211    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.670710    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:40.665852    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.666546    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.668621    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.669211    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.670710    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:43.175524 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:43.186492 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:43.186570 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:43.210685 1542350 cri.go:89] found id: ""
	I1213 16:15:43.210712 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.210721 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:43.210728 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:43.210787 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:43.237076 1542350 cri.go:89] found id: ""
	I1213 16:15:43.237103 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.237112 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:43.237118 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:43.237177 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:43.264682 1542350 cri.go:89] found id: ""
	I1213 16:15:43.264756 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.264771 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:43.264778 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:43.264842 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:43.290869 1542350 cri.go:89] found id: ""
	I1213 16:15:43.290896 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.290905 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:43.290912 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:43.290976 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:43.316279 1542350 cri.go:89] found id: ""
	I1213 16:15:43.316306 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.316315 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:43.316322 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:43.316383 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:43.354838 1542350 cri.go:89] found id: ""
	I1213 16:15:43.354864 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.354873 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:43.354880 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:43.354957 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:43.391172 1542350 cri.go:89] found id: ""
	I1213 16:15:43.391198 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.391207 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:43.391213 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:43.391274 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:43.418613 1542350 cri.go:89] found id: ""
	I1213 16:15:43.418647 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.418657 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:43.418667 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:43.418680 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:43.435343 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:43.435384 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:43.503984 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:43.495327    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.495856    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.497696    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.498105    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.499619    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:43.495327    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.495856    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.497696    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.498105    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.499619    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:43.504005 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:43.504018 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:43.530844 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:43.530882 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:43.563046 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:43.563079 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:46.121764 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:46.133205 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:46.133278 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:46.159902 1542350 cri.go:89] found id: ""
	I1213 16:15:46.159926 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.159935 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:46.159941 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:46.160016 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:46.189203 1542350 cri.go:89] found id: ""
	I1213 16:15:46.189236 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.189260 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:46.189267 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:46.189336 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:46.214186 1542350 cri.go:89] found id: ""
	I1213 16:15:46.214208 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.214216 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:46.214222 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:46.214281 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:46.244894 1542350 cri.go:89] found id: ""
	I1213 16:15:46.244923 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.244943 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:46.244949 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:46.245015 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:46.270668 1542350 cri.go:89] found id: ""
	I1213 16:15:46.270693 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.270702 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:46.270708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:46.270771 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:46.296520 1542350 cri.go:89] found id: ""
	I1213 16:15:46.296565 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.296595 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:46.296603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:46.296684 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:46.322387 1542350 cri.go:89] found id: ""
	I1213 16:15:46.322410 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.322418 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:46.322424 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:46.322492 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:46.359071 1542350 cri.go:89] found id: ""
	I1213 16:15:46.359093 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.359102 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:46.359111 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:46.359121 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:46.397696 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:46.397772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:46.453341 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:46.453386 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:46.469917 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:46.469945 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:46.531639 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:46.523791    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.524434    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526137    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526426    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.527840    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:46.523791    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.524434    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526137    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526426    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.527840    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:46.531665 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:46.531678 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:49.058136 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:49.069039 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:49.069109 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:49.103600 1542350 cri.go:89] found id: ""
	I1213 16:15:49.103622 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.103630 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:49.103637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:49.103694 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:49.133756 1542350 cri.go:89] found id: ""
	I1213 16:15:49.133778 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.133787 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:49.133793 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:49.133850 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:49.159824 1542350 cri.go:89] found id: ""
	I1213 16:15:49.159847 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.159856 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:49.159862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:49.159919 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:49.188461 1542350 cri.go:89] found id: ""
	I1213 16:15:49.188527 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.188567 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:49.188598 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:49.188677 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:49.212316 1542350 cri.go:89] found id: ""
	I1213 16:15:49.212338 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.212346 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:49.212352 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:49.212424 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:49.236324 1542350 cri.go:89] found id: ""
	I1213 16:15:49.236348 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.236356 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:49.236362 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:49.236423 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:49.262438 1542350 cri.go:89] found id: ""
	I1213 16:15:49.262475 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.262484 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:49.262491 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:49.262578 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:49.292613 1542350 cri.go:89] found id: ""
	I1213 16:15:49.292637 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.292646 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:49.292655 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:49.292667 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:49.350224 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:49.350260 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:49.367633 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:49.367661 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:49.436081 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:49.427382    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.428117    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.429891    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.430411    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.431982    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:49.427382    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.428117    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.429891    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.430411    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.431982    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:49.436102 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:49.436115 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:49.461438 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:49.461474 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:51.994161 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:52.005864 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:52.005962 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:52.032002 1542350 cri.go:89] found id: ""
	I1213 16:15:52.032027 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.032052 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:52.032059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:52.032118 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:52.058529 1542350 cri.go:89] found id: ""
	I1213 16:15:52.058552 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.058561 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:52.058567 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:52.058627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:52.085765 1542350 cri.go:89] found id: ""
	I1213 16:15:52.085787 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.085795 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:52.085802 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:52.085860 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:52.113317 1542350 cri.go:89] found id: ""
	I1213 16:15:52.113389 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.113411 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:52.113430 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:52.113512 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:52.144343 1542350 cri.go:89] found id: ""
	I1213 16:15:52.144364 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.144373 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:52.144379 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:52.144450 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:52.170804 1542350 cri.go:89] found id: ""
	I1213 16:15:52.170876 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.170899 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:52.170916 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:52.171015 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:52.200043 1542350 cri.go:89] found id: ""
	I1213 16:15:52.200114 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.200137 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:52.200155 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:52.200254 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:52.226948 1542350 cri.go:89] found id: ""
	I1213 16:15:52.227022 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.227057 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:52.227086 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:52.227120 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:52.282092 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:52.282131 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:52.298201 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:52.298227 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:52.381110 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:52.372691    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.373232    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.374717    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.375078    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.376590    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:52.372691    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.373232    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.374717    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.375078    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.376590    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:52.381134 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:52.381148 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:52.409962 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:52.409994 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:54.942176 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:54.952757 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:54.952836 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:54.977644 1542350 cri.go:89] found id: ""
	I1213 16:15:54.977669 1542350 logs.go:282] 0 containers: []
	W1213 16:15:54.977678 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:54.977684 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:54.977742 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:55.005694 1542350 cri.go:89] found id: ""
	I1213 16:15:55.005722 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.005732 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:55.005740 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:55.005814 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:55.038377 1542350 cri.go:89] found id: ""
	I1213 16:15:55.038411 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.038422 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:55.038428 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:55.038493 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:55.065383 1542350 cri.go:89] found id: ""
	I1213 16:15:55.065417 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.065426 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:55.065433 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:55.065493 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:55.099813 1542350 cri.go:89] found id: ""
	I1213 16:15:55.099841 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.099850 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:55.099856 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:55.099931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:55.128346 1542350 cri.go:89] found id: ""
	I1213 16:15:55.128368 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.128380 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:55.128387 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:55.128456 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:55.160925 1542350 cri.go:89] found id: ""
	I1213 16:15:55.160957 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.160966 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:55.160973 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:55.161037 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:55.188105 1542350 cri.go:89] found id: ""
	I1213 16:15:55.188132 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.188141 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:55.188151 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:55.188164 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:55.218869 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:55.218893 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:55.274258 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:55.274294 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:55.290251 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:55.290280 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:55.359521 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:55.350886    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.351709    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353380    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353940    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.355564    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:55.350886    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.351709    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353380    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353940    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.355564    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:55.359543 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:55.359556 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:57.887804 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:57.898226 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:57.898297 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:57.922697 1542350 cri.go:89] found id: ""
	I1213 16:15:57.922723 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.922732 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:57.922740 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:57.922821 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:57.947431 1542350 cri.go:89] found id: ""
	I1213 16:15:57.947457 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.947467 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:57.947473 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:57.947532 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:57.971494 1542350 cri.go:89] found id: ""
	I1213 16:15:57.971557 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.971582 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:57.971601 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:57.971679 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:57.999470 1542350 cri.go:89] found id: ""
	I1213 16:15:57.999495 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.999504 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:57.999510 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:57.999572 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:58.028740 1542350 cri.go:89] found id: ""
	I1213 16:15:58.028767 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.028777 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:58.028783 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:58.028849 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:58.054022 1542350 cri.go:89] found id: ""
	I1213 16:15:58.054043 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.054053 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:58.054059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:58.054121 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:58.096720 1542350 cri.go:89] found id: ""
	I1213 16:15:58.096749 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.096758 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:58.096765 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:58.096825 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:58.133084 1542350 cri.go:89] found id: ""
	I1213 16:15:58.133114 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.133123 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:58.133133 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:58.133144 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:58.198401 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:58.198437 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:58.216601 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:58.216683 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:58.288456 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:58.279720    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.280528    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282071    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282726    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.283743    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:58.279720    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.280528    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282071    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282726    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.283743    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:58.288523 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:58.288544 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:58.314432 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:58.314470 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:00.851874 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:00.862470 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:00.862540 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:00.886360 1542350 cri.go:89] found id: ""
	I1213 16:16:00.886384 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.886392 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:00.886398 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:00.886458 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:00.910826 1542350 cri.go:89] found id: ""
	I1213 16:16:00.910851 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.910861 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:00.910867 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:00.910925 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:00.935111 1542350 cri.go:89] found id: ""
	I1213 16:16:00.935141 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.935150 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:00.935156 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:00.935214 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:00.960959 1542350 cri.go:89] found id: ""
	I1213 16:16:00.960982 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.960991 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:00.960997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:00.961057 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:00.985954 1542350 cri.go:89] found id: ""
	I1213 16:16:00.985977 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.985986 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:00.985991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:00.986052 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:01.011865 1542350 cri.go:89] found id: ""
	I1213 16:16:01.011889 1542350 logs.go:282] 0 containers: []
	W1213 16:16:01.011897 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:01.011903 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:01.011966 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:01.041391 1542350 cri.go:89] found id: ""
	I1213 16:16:01.041412 1542350 logs.go:282] 0 containers: []
	W1213 16:16:01.041421 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:01.041427 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:01.041486 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:01.065980 1542350 cri.go:89] found id: ""
	I1213 16:16:01.066001 1542350 logs.go:282] 0 containers: []
	W1213 16:16:01.066010 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:01.066020 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:01.066035 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:01.125520 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:01.125602 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:01.143155 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:01.143228 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:01.224569 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:01.213708    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.214378    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.216451    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.217392    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.219480    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:01.213708    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.214378    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.216451    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.217392    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.219480    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:01.224588 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:01.224602 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:01.251006 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:01.251045 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:03.780250 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:03.794327 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:03.794399 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:03.819181 1542350 cri.go:89] found id: ""
	I1213 16:16:03.819209 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.819218 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:03.819224 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:03.819285 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:03.845225 1542350 cri.go:89] found id: ""
	I1213 16:16:03.845248 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.845257 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:03.845264 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:03.845324 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:03.873944 1542350 cri.go:89] found id: ""
	I1213 16:16:03.873966 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.873975 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:03.873981 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:03.874042 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:03.899655 1542350 cri.go:89] found id: ""
	I1213 16:16:03.899685 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.899694 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:03.899701 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:03.899763 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:03.927094 1542350 cri.go:89] found id: ""
	I1213 16:16:03.927122 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.927131 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:03.927137 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:03.927196 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:03.952240 1542350 cri.go:89] found id: ""
	I1213 16:16:03.952267 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.952276 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:03.952282 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:03.952340 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:03.976494 1542350 cri.go:89] found id: ""
	I1213 16:16:03.976520 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.976529 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:03.976535 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:03.976605 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:04.001277 1542350 cri.go:89] found id: ""
	I1213 16:16:04.001304 1542350 logs.go:282] 0 containers: []
	W1213 16:16:04.001313 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:04.001324 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:04.001339 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:04.061393 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:04.061428 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:04.078258 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:04.078290 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:04.162687 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:04.153708    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.154478    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156333    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156798    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.158424    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:04.153708    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.154478    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156333    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156798    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.158424    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:04.162710 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:04.162723 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:04.187844 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:04.187879 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:06.716865 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:06.727125 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:06.727193 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:06.752991 1542350 cri.go:89] found id: ""
	I1213 16:16:06.753015 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.753024 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:06.753030 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:06.753089 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:06.777092 1542350 cri.go:89] found id: ""
	I1213 16:16:06.777116 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.777125 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:06.777130 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:06.777188 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:06.805182 1542350 cri.go:89] found id: ""
	I1213 16:16:06.805256 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.805278 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:06.805292 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:06.805363 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:06.833454 1542350 cri.go:89] found id: ""
	I1213 16:16:06.833477 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.833486 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:06.833492 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:06.833553 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:06.864279 1542350 cri.go:89] found id: ""
	I1213 16:16:06.864303 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.864311 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:06.864318 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:06.864379 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:06.889879 1542350 cri.go:89] found id: ""
	I1213 16:16:06.889905 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.889914 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:06.889920 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:06.889980 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:06.913566 1542350 cri.go:89] found id: ""
	I1213 16:16:06.913600 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.913609 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:06.913615 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:06.913682 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:06.939090 1542350 cri.go:89] found id: ""
	I1213 16:16:06.939161 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.939199 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:06.939226 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:06.939253 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:06.994546 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:06.994587 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:07.012062 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:07.012099 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:07.079574 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:07.070333    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.070833    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.072777    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.073334    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.074988    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:07.070333    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.070833    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.072777    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.073334    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.074988    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:07.079597 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:07.079609 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:07.106688 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:07.106772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:09.648446 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:09.659497 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:09.659572 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:09.685004 1542350 cri.go:89] found id: ""
	I1213 16:16:09.685031 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.685040 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:09.685047 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:09.685106 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:09.710322 1542350 cri.go:89] found id: ""
	I1213 16:16:09.710350 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.710359 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:09.710365 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:09.710424 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:09.736183 1542350 cri.go:89] found id: ""
	I1213 16:16:09.736209 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.736218 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:09.736225 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:09.736328 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:09.761808 1542350 cri.go:89] found id: ""
	I1213 16:16:09.761831 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.761839 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:09.761846 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:09.761907 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:09.788666 1542350 cri.go:89] found id: ""
	I1213 16:16:09.788690 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.788699 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:09.788705 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:09.788767 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:09.815565 1542350 cri.go:89] found id: ""
	I1213 16:16:09.815590 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.815598 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:09.815604 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:09.815663 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:09.841443 1542350 cri.go:89] found id: ""
	I1213 16:16:09.841466 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.841475 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:09.841481 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:09.841538 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:09.870775 1542350 cri.go:89] found id: ""
	I1213 16:16:09.870798 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.870806 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:09.870818 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:09.870829 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:09.927243 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:09.927279 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:09.944116 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:09.944150 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:10.018299 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:10.008262    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.009034    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011237    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011729    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.013703    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:10.008262    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.009034    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011237    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011729    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.013703    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:10.018334 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:10.018348 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:10.062337 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:10.062384 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:12.610748 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:12.622191 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:12.622266 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:12.654912 1542350 cri.go:89] found id: ""
	I1213 16:16:12.654939 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.654948 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:12.654955 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:12.655017 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:12.679878 1542350 cri.go:89] found id: ""
	I1213 16:16:12.679904 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.679913 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:12.679919 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:12.679981 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:12.708594 1542350 cri.go:89] found id: ""
	I1213 16:16:12.708619 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.708628 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:12.708641 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:12.708703 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:12.734832 1542350 cri.go:89] found id: ""
	I1213 16:16:12.734857 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.734866 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:12.734872 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:12.734931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:12.760756 1542350 cri.go:89] found id: ""
	I1213 16:16:12.760784 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.760793 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:12.760799 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:12.760860 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:12.786434 1542350 cri.go:89] found id: ""
	I1213 16:16:12.786470 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.786479 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:12.786486 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:12.786558 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:12.810666 1542350 cri.go:89] found id: ""
	I1213 16:16:12.810699 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.810708 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:12.810714 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:12.810779 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:12.835161 1542350 cri.go:89] found id: ""
	I1213 16:16:12.835206 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.835216 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:12.835225 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:12.835238 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:12.851412 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:12.851438 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:12.919002 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:12.910126    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.910878    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.912652    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.913309    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.915168    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:12.910126    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.910878    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.912652    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.913309    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.915168    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:12.919032 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:12.919045 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:12.945016 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:12.945054 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:12.975303 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:12.975353 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:15.533437 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:15.545434 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:15.545514 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:15.570277 1542350 cri.go:89] found id: ""
	I1213 16:16:15.570303 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.570353 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:15.570362 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:15.570427 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:15.602983 1542350 cri.go:89] found id: ""
	I1213 16:16:15.603009 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.603017 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:15.603023 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:15.603082 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:15.631137 1542350 cri.go:89] found id: ""
	I1213 16:16:15.631172 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.631181 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:15.631187 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:15.631245 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:15.664783 1542350 cri.go:89] found id: ""
	I1213 16:16:15.664810 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.664819 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:15.664825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:15.664886 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:15.691237 1542350 cri.go:89] found id: ""
	I1213 16:16:15.691264 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.691274 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:15.691280 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:15.691368 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:15.715449 1542350 cri.go:89] found id: ""
	I1213 16:16:15.715473 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.715482 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:15.715489 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:15.715553 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:15.740667 1542350 cri.go:89] found id: ""
	I1213 16:16:15.740692 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.740701 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:15.740707 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:15.740770 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:15.765160 1542350 cri.go:89] found id: ""
	I1213 16:16:15.765182 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.765191 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:15.765200 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:15.765212 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:15.820427 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:15.820466 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:15.836513 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:15.836541 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:15.903389 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:15.894908    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.895741    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897308    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897848    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.899415    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:15.894908    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.895741    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897308    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897848    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.899415    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:15.903412 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:15.903427 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:15.928787 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:15.928825 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:18.458780 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:18.469268 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:18.469341 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:18.497781 1542350 cri.go:89] found id: ""
	I1213 16:16:18.497811 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.497824 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:18.497831 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:18.497918 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:18.522772 1542350 cri.go:89] found id: ""
	I1213 16:16:18.522799 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.522808 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:18.522815 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:18.522874 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:18.549419 1542350 cri.go:89] found id: ""
	I1213 16:16:18.549443 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.549452 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:18.549457 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:18.549524 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:18.573853 1542350 cri.go:89] found id: ""
	I1213 16:16:18.573881 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.573889 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:18.573896 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:18.573960 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:18.604140 1542350 cri.go:89] found id: ""
	I1213 16:16:18.604167 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.604188 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:18.604194 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:18.604264 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:18.637649 1542350 cri.go:89] found id: ""
	I1213 16:16:18.637677 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.637686 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:18.637692 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:18.637752 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:18.668019 1542350 cri.go:89] found id: ""
	I1213 16:16:18.668045 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.668053 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:18.668059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:18.668120 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:18.694456 1542350 cri.go:89] found id: ""
	I1213 16:16:18.694482 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.694493 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:18.694503 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:18.694515 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:18.722967 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:18.722995 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:18.780808 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:18.780844 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:18.797393 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:18.797421 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:18.866061 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:18.858113    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.858623    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860137    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860610    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.862072    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:18.858113    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.858623    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860137    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860610    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.862072    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:18.866083 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:18.866096 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:21.391436 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:21.403266 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:21.403363 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:21.429372 1542350 cri.go:89] found id: ""
	I1213 16:16:21.429405 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.429415 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:21.429420 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:21.429479 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:21.454218 1542350 cri.go:89] found id: ""
	I1213 16:16:21.454287 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.454311 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:21.454329 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:21.454420 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:21.478016 1542350 cri.go:89] found id: ""
	I1213 16:16:21.478041 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.478049 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:21.478055 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:21.478112 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:21.504574 1542350 cri.go:89] found id: ""
	I1213 16:16:21.504612 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.504622 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:21.504629 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:21.504692 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:21.531727 1542350 cri.go:89] found id: ""
	I1213 16:16:21.531761 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.531770 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:21.531777 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:21.531836 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:21.556964 1542350 cri.go:89] found id: ""
	I1213 16:16:21.556999 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.557010 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:21.557018 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:21.557077 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:21.592445 1542350 cri.go:89] found id: ""
	I1213 16:16:21.592509 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.592533 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:21.592550 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:21.592645 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:21.620898 1542350 cri.go:89] found id: ""
	I1213 16:16:21.620920 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.620928 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:21.620937 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:21.620949 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:21.682810 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:21.682846 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:21.699275 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:21.699375 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:21.766336 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:21.758580    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.759107    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.760601    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.761028    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.762478    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:21.758580    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.759107    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.760601    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.761028    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.762478    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:21.766397 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:21.766426 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:21.791266 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:21.791300 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:24.319481 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:24.330216 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:24.330310 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:24.369003 1542350 cri.go:89] found id: ""
	I1213 16:16:24.369033 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.369041 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:24.369047 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:24.369106 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:24.396473 1542350 cri.go:89] found id: ""
	I1213 16:16:24.396502 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.396511 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:24.396516 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:24.396580 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:24.436915 1542350 cri.go:89] found id: ""
	I1213 16:16:24.436939 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.436948 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:24.436953 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:24.437013 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:24.465118 1542350 cri.go:89] found id: ""
	I1213 16:16:24.465139 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.465147 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:24.465153 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:24.465211 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:24.490097 1542350 cri.go:89] found id: ""
	I1213 16:16:24.490121 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.490130 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:24.490136 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:24.490196 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:24.520031 1542350 cri.go:89] found id: ""
	I1213 16:16:24.520096 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.520120 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:24.520141 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:24.520214 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:24.545891 1542350 cri.go:89] found id: ""
	I1213 16:16:24.545919 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.545928 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:24.545933 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:24.546014 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:24.574276 1542350 cri.go:89] found id: ""
	I1213 16:16:24.574313 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.574323 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:24.574353 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:24.574387 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:24.611068 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:24.611145 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:24.677764 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:24.677808 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:24.696759 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:24.696802 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:24.773564 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:24.765018    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.765700    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767375    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767981    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.769749    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:24.765018    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.765700    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767375    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767981    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.769749    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:24.773586 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:24.773598 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:27.299826 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:27.310825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:27.310902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:27.341771 1542350 cri.go:89] found id: ""
	I1213 16:16:27.341794 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.341803 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:27.341810 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:27.341876 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:27.369884 1542350 cri.go:89] found id: ""
	I1213 16:16:27.369908 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.369917 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:27.369923 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:27.369988 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:27.402575 1542350 cri.go:89] found id: ""
	I1213 16:16:27.402598 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.402606 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:27.402612 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:27.402680 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:27.429116 1542350 cri.go:89] found id: ""
	I1213 16:16:27.429157 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.429169 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:27.429176 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:27.429245 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:27.456147 1542350 cri.go:89] found id: ""
	I1213 16:16:27.456174 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.456183 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:27.456191 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:27.456254 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:27.481262 1542350 cri.go:89] found id: ""
	I1213 16:16:27.481288 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.481297 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:27.481304 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:27.481370 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:27.507140 1542350 cri.go:89] found id: ""
	I1213 16:16:27.507169 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.507179 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:27.507185 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:27.507269 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:27.532060 1542350 cri.go:89] found id: ""
	I1213 16:16:27.532139 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.532162 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:27.532180 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:27.532193 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:27.588083 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:27.588123 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:27.605875 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:27.605906 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:27.677799 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:27.667405    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.668242    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.669729    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.670202    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.673720    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:27.667405    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.668242    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.669729    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.670202    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.673720    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:27.677822 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:27.677834 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:27.703668 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:27.703704 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:30.232616 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:30.244334 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:30.244408 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:30.269730 1542350 cri.go:89] found id: ""
	I1213 16:16:30.269757 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.269765 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:30.269771 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:30.269830 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:30.296665 1542350 cri.go:89] found id: ""
	I1213 16:16:30.296693 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.296702 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:30.296709 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:30.296832 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:30.322172 1542350 cri.go:89] found id: ""
	I1213 16:16:30.322251 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.322276 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:30.322296 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:30.322405 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:30.364083 1542350 cri.go:89] found id: ""
	I1213 16:16:30.364113 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.364125 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:30.364138 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:30.364206 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:30.405727 1542350 cri.go:89] found id: ""
	I1213 16:16:30.405751 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.405759 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:30.405765 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:30.405825 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:30.432819 1542350 cri.go:89] found id: ""
	I1213 16:16:30.432846 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.432855 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:30.432862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:30.432921 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:30.458202 1542350 cri.go:89] found id: ""
	I1213 16:16:30.458228 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.458237 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:30.458243 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:30.458310 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:30.482950 1542350 cri.go:89] found id: ""
	I1213 16:16:30.482977 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.482987 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:30.482996 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:30.483008 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:30.507886 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:30.507921 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:30.538090 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:30.538159 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:30.593644 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:30.593729 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:30.610246 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:30.610272 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:30.684359 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:30.676539    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.677025    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678556    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678874    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.680436    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:30.676539    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.677025    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678556    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678874    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.680436    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
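(The block above is one iteration of a probe loop that repeats every few seconds: minikube checks for a running kube-apiserver process, lists control-plane containers through crictl, finds none, then gathers kubelet/dmesg/containerd logs and retries. A minimal sketch of the same probe run by hand is below; the inner commands are copied verbatim from the "Run:" lines above, while "<profile>" is only a placeholder for the minikube profile under test, not a value taken from this log.)

    # hedged sketch: re-run the probe this loop keeps failing on
    minikube -p <profile> ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"          # is an apiserver process up?
    minikube -p <profile> ssh "sudo crictl ps -a --quiet --name=kube-apiserver"     # any apiserver container, running or exited?
    minikube -p <profile> ssh "sudo journalctl -u kubelet -n 400"                   # kubelet tail, same as the gather step above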
	I1213 16:16:33.184602 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:33.195455 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:33.195556 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:33.225437 1542350 cri.go:89] found id: ""
	I1213 16:16:33.225459 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.225468 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:33.225474 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:33.225541 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:33.250024 1542350 cri.go:89] found id: ""
	I1213 16:16:33.250089 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.250113 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:33.250131 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:33.250218 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:33.275721 1542350 cri.go:89] found id: ""
	I1213 16:16:33.275747 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.275755 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:33.275762 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:33.275823 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:33.300346 1542350 cri.go:89] found id: ""
	I1213 16:16:33.300368 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.300377 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:33.300383 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:33.300442 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:33.324866 1542350 cri.go:89] found id: ""
	I1213 16:16:33.324889 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.324897 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:33.324904 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:33.324963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:33.354142 1542350 cri.go:89] found id: ""
	I1213 16:16:33.354216 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.354239 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:33.354257 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:33.354347 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:33.388195 1542350 cri.go:89] found id: ""
	I1213 16:16:33.388216 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.388224 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:33.388230 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:33.388286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:33.416283 1542350 cri.go:89] found id: ""
	I1213 16:16:33.416306 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.416314 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:33.416325 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:33.416337 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:33.432175 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:33.432206 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:33.499040 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:33.490874    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.491494    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493033    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493510    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.495051    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:33.490874    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.491494    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493033    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493510    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.495051    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:33.499062 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:33.499074 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:33.524925 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:33.524958 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:33.554998 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:33.555026 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:36.110953 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:36.121861 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:36.121930 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:36.146369 1542350 cri.go:89] found id: ""
	I1213 16:16:36.146429 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.146450 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:36.146476 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:36.146557 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:36.171595 1542350 cri.go:89] found id: ""
	I1213 16:16:36.171617 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.171625 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:36.171631 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:36.171693 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:36.196869 1542350 cri.go:89] found id: ""
	I1213 16:16:36.196891 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.196900 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:36.196906 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:36.196963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:36.221290 1542350 cri.go:89] found id: ""
	I1213 16:16:36.221317 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.221326 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:36.221338 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:36.221400 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:36.246254 1542350 cri.go:89] found id: ""
	I1213 16:16:36.246280 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.246289 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:36.246294 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:36.246352 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:36.276463 1542350 cri.go:89] found id: ""
	I1213 16:16:36.276486 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.276494 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:36.276500 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:36.276565 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:36.302414 1542350 cri.go:89] found id: ""
	I1213 16:16:36.302446 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.302454 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:36.302460 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:36.302530 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:36.327676 1542350 cri.go:89] found id: ""
	I1213 16:16:36.327753 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.327770 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:36.327781 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:36.327793 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:36.347589 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:36.347658 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:36.422910 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:36.414535    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.415436    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417255    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417652    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.419100    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:36.414535    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.415436    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417255    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417652    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.419100    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:36.422940 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:36.422968 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:36.449077 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:36.449114 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:36.476904 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:36.476935 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:39.032927 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:39.043398 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:39.043466 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:39.068941 1542350 cri.go:89] found id: ""
	I1213 16:16:39.068968 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.068977 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:39.068983 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:39.069040 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:39.094525 1542350 cri.go:89] found id: ""
	I1213 16:16:39.094548 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.094557 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:39.094564 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:39.094626 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:39.118854 1542350 cri.go:89] found id: ""
	I1213 16:16:39.118875 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.118884 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:39.118890 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:39.118946 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:39.147615 1542350 cri.go:89] found id: ""
	I1213 16:16:39.147642 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.147651 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:39.147657 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:39.147719 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:39.173015 1542350 cri.go:89] found id: ""
	I1213 16:16:39.173038 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.173047 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:39.173053 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:39.173121 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:39.198427 1542350 cri.go:89] found id: ""
	I1213 16:16:39.198453 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.198462 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:39.198468 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:39.198525 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:39.223491 1542350 cri.go:89] found id: ""
	I1213 16:16:39.223514 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.223522 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:39.223528 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:39.223587 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:39.254117 1542350 cri.go:89] found id: ""
	I1213 16:16:39.254148 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.254157 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:39.254166 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:39.254178 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:39.313667 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:39.313706 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:39.331137 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:39.331215 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:39.414971 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:39.406302    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.407133    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.408880    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.409415    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.411052    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:39.406302    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.407133    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.408880    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.409415    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.411052    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:39.414990 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:39.415003 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:39.440561 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:39.440604 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:41.973087 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:41.983385 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:41.983456 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:42.010547 1542350 cri.go:89] found id: ""
	I1213 16:16:42.010644 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.010658 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:42.010666 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:42.010780 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:42.041355 1542350 cri.go:89] found id: ""
	I1213 16:16:42.041379 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.041388 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:42.041394 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:42.041462 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:42.074781 1542350 cri.go:89] found id: ""
	I1213 16:16:42.074808 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.074818 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:42.074825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:42.074895 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:42.105943 1542350 cri.go:89] found id: ""
	I1213 16:16:42.105972 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.105980 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:42.105987 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:42.106062 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:42.144036 1542350 cri.go:89] found id: ""
	I1213 16:16:42.144062 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.144070 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:42.144077 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:42.144144 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:42.177438 1542350 cri.go:89] found id: ""
	I1213 16:16:42.177464 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.177474 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:42.177482 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:42.177555 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:42.209616 1542350 cri.go:89] found id: ""
	I1213 16:16:42.209644 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.209653 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:42.209662 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:42.209730 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:42.240251 1542350 cri.go:89] found id: ""
	I1213 16:16:42.240283 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.240293 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:42.240303 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:42.240317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:42.274974 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:42.275008 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:42.333409 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:42.333488 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:42.353909 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:42.353998 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:42.431547 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:42.422852    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.423662    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425409    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425733    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.427251    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:42.422852    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.423662    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425409    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425733    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.427251    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:42.431570 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:42.431582 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
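(Every "describe nodes" attempt above fails the same way: kubectl inside the node cannot reach https://localhost:8443, which is consistent with the empty crictl listings, since no apiserver container ever started, so nothing is listening on 8443. A hedged way to confirm that from inside the node is sketched below; the availability of ss in the node image is an assumption, the crictl call is the same listing the gather steps run.)

    # assumption: run inside the node (e.g. via minikube ssh); ss may not exist in every node image
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    sudo crictl ps -a    # expect no control-plane containers, matching the log above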
	I1213 16:16:44.957982 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:44.968708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:44.968778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:44.998179 1542350 cri.go:89] found id: ""
	I1213 16:16:44.998205 1542350 logs.go:282] 0 containers: []
	W1213 16:16:44.998214 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:44.998220 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:44.998281 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:45.055672 1542350 cri.go:89] found id: ""
	I1213 16:16:45.055695 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.055705 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:45.055712 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:45.055785 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:45.112504 1542350 cri.go:89] found id: ""
	I1213 16:16:45.112598 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.112625 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:45.112646 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:45.112821 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:45.148966 1542350 cri.go:89] found id: ""
	I1213 16:16:45.148993 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.149002 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:45.149008 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:45.149081 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:45.215276 1542350 cri.go:89] found id: ""
	I1213 16:16:45.215383 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.215547 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:45.215573 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:45.215685 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:45.266343 1542350 cri.go:89] found id: ""
	I1213 16:16:45.266422 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.266448 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:45.266469 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:45.266569 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:45.311801 1542350 cri.go:89] found id: ""
	I1213 16:16:45.311877 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.311905 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:45.311925 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:45.312039 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:45.345856 1542350 cri.go:89] found id: ""
	I1213 16:16:45.345884 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.345894 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:45.345904 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:45.345928 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:45.416309 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:45.416392 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:45.433509 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:45.433593 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:45.504820 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:45.495815    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.496673    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498435    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498745    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.500220    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:45.495815    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.496673    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498435    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498745    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.500220    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:45.504841 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:45.504855 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:45.530797 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:45.530836 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:48.061294 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:48.072582 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:48.072653 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:48.101139 1542350 cri.go:89] found id: ""
	I1213 16:16:48.101164 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.101173 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:48.101179 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:48.101250 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:48.127077 1542350 cri.go:89] found id: ""
	I1213 16:16:48.127100 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.127109 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:48.127115 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:48.127179 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:48.152708 1542350 cri.go:89] found id: ""
	I1213 16:16:48.152731 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.152740 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:48.152746 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:48.152806 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:48.183194 1542350 cri.go:89] found id: ""
	I1213 16:16:48.183220 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.183228 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:48.183235 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:48.183295 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:48.208544 1542350 cri.go:89] found id: ""
	I1213 16:16:48.208612 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.208638 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:48.208658 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:48.208773 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:48.234599 1542350 cri.go:89] found id: ""
	I1213 16:16:48.234633 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.234642 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:48.234667 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:48.234745 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:48.259586 1542350 cri.go:89] found id: ""
	I1213 16:16:48.259614 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.259623 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:48.259629 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:48.259712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:48.283477 1542350 cri.go:89] found id: ""
	I1213 16:16:48.283499 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.283509 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:48.283542 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:48.283561 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:48.339116 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:48.339190 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:48.360686 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:48.360767 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:48.433619 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:48.425212    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.425811    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427513    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427900    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.429452    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:48.425212    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.425811    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427513    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427900    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.429452    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:48.433643 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:48.433655 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:48.458793 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:48.458837 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:50.988521 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:50.999862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:50.999930 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:51.029019 1542350 cri.go:89] found id: ""
	I1213 16:16:51.029045 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.029054 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:51.029060 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:51.029132 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:51.058195 1542350 cri.go:89] found id: ""
	I1213 16:16:51.058222 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.058231 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:51.058237 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:51.058297 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:51.083486 1542350 cri.go:89] found id: ""
	I1213 16:16:51.083512 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.083521 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:51.083527 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:51.083589 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:51.108698 1542350 cri.go:89] found id: ""
	I1213 16:16:51.108723 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.108733 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:51.108739 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:51.108801 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:51.133979 1542350 cri.go:89] found id: ""
	I1213 16:16:51.134003 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.134011 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:51.134017 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:51.134074 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:51.161527 1542350 cri.go:89] found id: ""
	I1213 16:16:51.161552 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.161562 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:51.161568 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:51.161627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:51.186814 1542350 cri.go:89] found id: ""
	I1213 16:16:51.186841 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.186850 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:51.186856 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:51.186916 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:51.216180 1542350 cri.go:89] found id: ""
	I1213 16:16:51.216212 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.216221 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:51.216230 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:51.216245 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:51.273877 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:51.273919 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:51.291469 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:51.291502 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:51.365379 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:51.356884    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.357610    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359231    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359765    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.361313    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:51.356884    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.357610    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359231    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359765    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.361313    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:51.365447 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:51.365471 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:51.393925 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:51.393997 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:53.927124 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:53.937787 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:53.937865 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:53.965198 1542350 cri.go:89] found id: ""
	I1213 16:16:53.965222 1542350 logs.go:282] 0 containers: []
	W1213 16:16:53.965230 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:53.965236 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:53.965295 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:53.990127 1542350 cri.go:89] found id: ""
	I1213 16:16:53.990153 1542350 logs.go:282] 0 containers: []
	W1213 16:16:53.990162 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:53.990168 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:53.990227 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:54.017573 1542350 cri.go:89] found id: ""
	I1213 16:16:54.017600 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.017610 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:54.017627 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:54.017691 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:54.042201 1542350 cri.go:89] found id: ""
	I1213 16:16:54.042223 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.042232 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:54.042239 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:54.042297 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:54.069040 1542350 cri.go:89] found id: ""
	I1213 16:16:54.069064 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.069072 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:54.069079 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:54.069139 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:54.094593 1542350 cri.go:89] found id: ""
	I1213 16:16:54.094614 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.094624 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:54.094630 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:54.094692 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:54.118976 1542350 cri.go:89] found id: ""
	I1213 16:16:54.119047 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.119070 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:54.119088 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:54.119162 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:54.145323 1542350 cri.go:89] found id: ""
	I1213 16:16:54.145346 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.145355 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:54.145364 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:54.145375 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:54.170838 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:54.170873 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:54.198725 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:54.198752 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:54.253610 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:54.253646 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:54.272399 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:54.272428 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:54.360945 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:54.351253    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.352548    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.354388    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.355099    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.356700    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:54.351253    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.352548    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.354388    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.355099    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.356700    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:56.861910 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:56.873998 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:56.874110 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:56.904398 1542350 cri.go:89] found id: ""
	I1213 16:16:56.904423 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.904432 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:56.904438 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:56.904498 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:56.928756 1542350 cri.go:89] found id: ""
	I1213 16:16:56.928783 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.928792 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:56.928798 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:56.928856 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:56.952449 1542350 cri.go:89] found id: ""
	I1213 16:16:56.952473 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.952481 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:56.952487 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:56.952544 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:56.976949 1542350 cri.go:89] found id: ""
	I1213 16:16:56.976973 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.976981 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:56.976988 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:56.977074 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:57.001996 1542350 cri.go:89] found id: ""
	I1213 16:16:57.002023 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.002032 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:57.002039 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:57.002107 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:57.033494 1542350 cri.go:89] found id: ""
	I1213 16:16:57.033519 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.033527 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:57.033533 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:57.033590 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:57.057055 1542350 cri.go:89] found id: ""
	I1213 16:16:57.057082 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.057090 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:57.057096 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:57.057153 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:57.086023 1542350 cri.go:89] found id: ""
	I1213 16:16:57.086047 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.086057 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:57.086066 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:57.086078 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:57.140604 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:57.140639 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:57.156471 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:57.156501 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:57.226365 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:57.217689   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.218500   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220107   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220657   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.222574   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:57.217689   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.218500   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220107   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220657   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.222574   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:57.226409 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:57.226425 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:57.251875 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:57.251911 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:59.781524 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:59.792544 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:59.792620 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:59.817081 1542350 cri.go:89] found id: ""
	I1213 16:16:59.817108 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.817123 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:59.817130 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:59.817197 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:59.854425 1542350 cri.go:89] found id: ""
	I1213 16:16:59.854453 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.854463 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:59.854469 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:59.854529 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:59.891724 1542350 cri.go:89] found id: ""
	I1213 16:16:59.891750 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.891759 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:59.891766 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:59.891826 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:59.921656 1542350 cri.go:89] found id: ""
	I1213 16:16:59.921682 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.921691 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:59.921697 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:59.921757 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:59.946905 1542350 cri.go:89] found id: ""
	I1213 16:16:59.946930 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.946943 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:59.946949 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:59.947011 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:59.974061 1542350 cri.go:89] found id: ""
	I1213 16:16:59.974087 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.974096 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:59.974103 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:59.974181 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:00.003912 1542350 cri.go:89] found id: ""
	I1213 16:17:00.003945 1542350 logs.go:282] 0 containers: []
	W1213 16:17:00.003955 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:00.003962 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:00.004041 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:00.129167 1542350 cri.go:89] found id: ""
	I1213 16:17:00.129242 1542350 logs.go:282] 0 containers: []
	W1213 16:17:00.129267 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:00.129291 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:00.129321 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:00.325276 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:00.316397   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.317316   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.318748   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.319511   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.320676   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:00.316397   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.317316   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.318748   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.319511   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.320676   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:00.325303 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:00.325317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:00.357630 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:00.357684 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:00.417887 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:00.417929 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:00.512817 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:00.512861 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:03.034231 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:03.045928 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:03.046041 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:03.073150 1542350 cri.go:89] found id: ""
	I1213 16:17:03.073178 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.073187 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:03.073194 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:03.073257 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:03.100010 1542350 cri.go:89] found id: ""
	I1213 16:17:03.100036 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.100046 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:03.100052 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:03.100118 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:03.126901 1542350 cri.go:89] found id: ""
	I1213 16:17:03.126929 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.126938 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:03.126944 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:03.127007 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:03.158512 1542350 cri.go:89] found id: ""
	I1213 16:17:03.158538 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.158547 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:03.158554 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:03.158623 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:03.186730 1542350 cri.go:89] found id: ""
	I1213 16:17:03.186757 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.186766 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:03.186773 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:03.186843 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:03.213877 1542350 cri.go:89] found id: ""
	I1213 16:17:03.213913 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.213922 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:03.213929 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:03.214000 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:03.244284 1542350 cri.go:89] found id: ""
	I1213 16:17:03.244360 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.244382 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:03.244401 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:03.244496 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:03.272102 1542350 cri.go:89] found id: ""
	I1213 16:17:03.272193 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.272210 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:03.272221 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:03.272234 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:03.330001 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:03.330036 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:03.347681 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:03.347716 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:03.430544 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:03.421427   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.422228   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424046   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424538   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.426050   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:03.421427   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.422228   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424046   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424538   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.426050   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:03.430566 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:03.430581 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:03.457512 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:03.457552 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:05.988326 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:06.000598 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:06.000678 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:06.036782 1542350 cri.go:89] found id: ""
	I1213 16:17:06.036859 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.036876 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:06.036891 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:06.036960 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:06.066595 1542350 cri.go:89] found id: ""
	I1213 16:17:06.066623 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.066633 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:06.066640 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:06.066705 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:06.095017 1542350 cri.go:89] found id: ""
	I1213 16:17:06.095047 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.095057 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:06.095064 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:06.095146 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:06.123113 1542350 cri.go:89] found id: ""
	I1213 16:17:06.123140 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.123150 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:06.123156 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:06.123223 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:06.150821 1542350 cri.go:89] found id: ""
	I1213 16:17:06.150847 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.150856 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:06.150862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:06.150925 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:06.176578 1542350 cri.go:89] found id: ""
	I1213 16:17:06.176608 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.176616 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:06.176623 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:06.176690 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:06.207351 1542350 cri.go:89] found id: ""
	I1213 16:17:06.207387 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.207397 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:06.207404 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:06.207468 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:06.233849 1542350 cri.go:89] found id: ""
	I1213 16:17:06.233872 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.233881 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:06.233890 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:06.233907 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:06.250685 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:06.250716 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:06.319519 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:06.310314   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.310885   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.312626   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.313247   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.315150   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:06.310314   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.310885   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.312626   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.313247   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.315150   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:06.319544 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:06.319566 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:06.346128 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:06.346163 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:06.386358 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:06.386439 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:08.950033 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:08.960761 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:08.960908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:08.984689 1542350 cri.go:89] found id: ""
	I1213 16:17:08.984727 1542350 logs.go:282] 0 containers: []
	W1213 16:17:08.984737 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:08.984760 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:08.984839 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:09.014786 1542350 cri.go:89] found id: ""
	I1213 16:17:09.014811 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.014820 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:09.014826 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:09.014890 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:09.044222 1542350 cri.go:89] found id: ""
	I1213 16:17:09.044257 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.044267 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:09.044276 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:09.044344 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:09.077612 1542350 cri.go:89] found id: ""
	I1213 16:17:09.077685 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.077708 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:09.077726 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:09.077815 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:09.105512 1542350 cri.go:89] found id: ""
	I1213 16:17:09.105535 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.105545 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:09.105551 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:09.105617 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:09.129780 1542350 cri.go:89] found id: ""
	I1213 16:17:09.129803 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.129811 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:09.129817 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:09.129878 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:09.154967 1542350 cri.go:89] found id: ""
	I1213 16:17:09.154993 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.155002 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:09.155009 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:09.155076 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:09.179699 1542350 cri.go:89] found id: ""
	I1213 16:17:09.179763 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.179789 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:09.179806 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:09.179817 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:09.235549 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:09.235580 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:09.251403 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:09.251431 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:09.319531 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:09.311333   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.311986   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.313546   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.314046   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.315495   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:09.311333   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.311986   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.313546   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.314046   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.315495   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:09.319549 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:09.319561 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:09.346608 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:09.346650 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
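	(The block above is one iteration of minikube's API-server wait loop: it looks for a kube-apiserver process, asks the CRI runtime for each control-plane container, and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status output; the describe step fails because nothing is listening on localhost:8443. A minimal shell sketch of the same per-cycle checks, assembled only from the commands visible in the log lines above and meant as an illustration, not as minikube's actual implementation:

	    # one diagnostic cycle, as recorded in the log above
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'          # is an apiserver process up?
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"            # empty output = no such container
	    done
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	         --kubeconfig=/var/lib/minikube/kubeconfig        # refused: apiserver down on :8443
	    sudo journalctl -u containerd -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	The same cycle repeats below every few seconds until the start timeout is reached.)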
	I1213 16:17:11.878089 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:11.889358 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:11.889432 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:11.915293 1542350 cri.go:89] found id: ""
	I1213 16:17:11.915330 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.915339 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:11.915346 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:11.915408 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:11.945256 1542350 cri.go:89] found id: ""
	I1213 16:17:11.945334 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.945359 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:11.945374 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:11.945452 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:11.969767 1542350 cri.go:89] found id: ""
	I1213 16:17:11.969794 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.969803 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:11.969809 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:11.969871 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:11.993969 1542350 cri.go:89] found id: ""
	I1213 16:17:11.993996 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.994005 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:11.994011 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:11.994089 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:12.029493 1542350 cri.go:89] found id: ""
	I1213 16:17:12.029521 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.029531 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:12.029543 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:12.029608 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:12.059180 1542350 cri.go:89] found id: ""
	I1213 16:17:12.059208 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.059217 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:12.059223 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:12.059283 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:12.087232 1542350 cri.go:89] found id: ""
	I1213 16:17:12.087261 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.087270 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:12.087276 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:12.087371 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:12.112813 1542350 cri.go:89] found id: ""
	I1213 16:17:12.112835 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.112844 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:12.112853 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:12.112864 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:12.138376 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:12.138408 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:12.166357 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:12.166387 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:12.222375 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:12.222410 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:12.239215 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:12.239247 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:12.308445 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:12.300806   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.301332   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.302968   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.303390   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.304404   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:12.300806   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.301332   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.302968   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.303390   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.304404   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:14.808692 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:14.819373 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:14.819444 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:14.852674 1542350 cri.go:89] found id: ""
	I1213 16:17:14.852703 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.852712 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:14.852728 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:14.852788 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:14.883668 1542350 cri.go:89] found id: ""
	I1213 16:17:14.883695 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.883704 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:14.883710 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:14.883767 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:14.911607 1542350 cri.go:89] found id: ""
	I1213 16:17:14.911630 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.911638 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:14.911644 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:14.911706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:14.936933 1542350 cri.go:89] found id: ""
	I1213 16:17:14.936960 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.936970 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:14.936977 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:14.937035 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:14.962547 1542350 cri.go:89] found id: ""
	I1213 16:17:14.962570 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.962580 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:14.962586 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:14.962689 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:14.986795 1542350 cri.go:89] found id: ""
	I1213 16:17:14.986820 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.986836 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:14.986843 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:14.986903 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:15.033107 1542350 cri.go:89] found id: ""
	I1213 16:17:15.033185 1542350 logs.go:282] 0 containers: []
	W1213 16:17:15.033224 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:15.033257 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:15.033365 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:15.061981 1542350 cri.go:89] found id: ""
	I1213 16:17:15.062060 1542350 logs.go:282] 0 containers: []
	W1213 16:17:15.062093 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:15.062116 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:15.062143 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:15.118734 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:15.118772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:15.135655 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:15.135685 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:15.203637 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:15.195493   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.196273   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.197861   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.198155   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.199758   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:15.195493   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.196273   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.197861   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.198155   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.199758   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:15.203658 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:15.203670 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:15.229691 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:15.229730 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:17.757141 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:17.767810 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:17.767883 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:17.795906 1542350 cri.go:89] found id: ""
	I1213 16:17:17.795930 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.795939 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:17.795945 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:17.796011 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:17.820499 1542350 cri.go:89] found id: ""
	I1213 16:17:17.820525 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.820534 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:17.820540 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:17.820597 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:17.852893 1542350 cri.go:89] found id: ""
	I1213 16:17:17.852922 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.852931 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:17.852936 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:17.852998 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:17.882522 1542350 cri.go:89] found id: ""
	I1213 16:17:17.882550 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.882559 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:17.882567 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:17.882625 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:17.910091 1542350 cri.go:89] found id: ""
	I1213 16:17:17.910119 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.910128 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:17.910133 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:17.910194 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:17.934842 1542350 cri.go:89] found id: ""
	I1213 16:17:17.934877 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.934886 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:17.934892 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:17.934957 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:17.959436 1542350 cri.go:89] found id: ""
	I1213 16:17:17.959470 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.959480 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:17.959491 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:17.959563 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:17.984392 1542350 cri.go:89] found id: ""
	I1213 16:17:17.984422 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.984431 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:17.984440 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:17.984452 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:18.039527 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:18.039566 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:18.055611 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:18.055637 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:18.119895 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:18.111397   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.112071   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.113661   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.114180   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.115834   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:18.111397   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.112071   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.113661   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.114180   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.115834   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:18.119920 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:18.119935 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:18.145247 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:18.145282 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:20.679491 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:20.690101 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:20.690172 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:20.715727 1542350 cri.go:89] found id: ""
	I1213 16:17:20.715753 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.715770 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:20.715780 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:20.715849 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:20.743470 1542350 cri.go:89] found id: ""
	I1213 16:17:20.743496 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.743504 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:20.743511 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:20.743570 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:20.768457 1542350 cri.go:89] found id: ""
	I1213 16:17:20.768480 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.768496 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:20.768503 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:20.768561 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:20.792618 1542350 cri.go:89] found id: ""
	I1213 16:17:20.792644 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.792653 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:20.792660 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:20.792718 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:20.817055 1542350 cri.go:89] found id: ""
	I1213 16:17:20.817077 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.817087 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:20.817093 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:20.817155 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:20.847328 1542350 cri.go:89] found id: ""
	I1213 16:17:20.847351 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.847360 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:20.847366 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:20.847428 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:20.885859 1542350 cri.go:89] found id: ""
	I1213 16:17:20.885882 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.885891 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:20.885898 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:20.885956 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:20.915753 1542350 cri.go:89] found id: ""
	I1213 16:17:20.915784 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.915794 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:20.915803 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:20.915815 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:20.970894 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:20.970934 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:20.986885 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:20.986910 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:21.055027 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:21.046462   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.046998   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.048753   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.049334   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.051047   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:21.046462   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.046998   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.048753   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.049334   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.051047   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:21.055049 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:21.055062 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:21.079833 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:21.079866 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:23.608166 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:23.619347 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:23.619414 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:23.649699 1542350 cri.go:89] found id: ""
	I1213 16:17:23.649721 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.649729 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:23.649736 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:23.649795 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:23.675224 1542350 cri.go:89] found id: ""
	I1213 16:17:23.675246 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.675255 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:23.675261 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:23.675349 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:23.700895 1542350 cri.go:89] found id: ""
	I1213 16:17:23.700918 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.700927 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:23.700933 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:23.700996 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:23.729110 1542350 cri.go:89] found id: ""
	I1213 16:17:23.729176 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.729191 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:23.729198 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:23.729257 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:23.753661 1542350 cri.go:89] found id: ""
	I1213 16:17:23.753688 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.753697 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:23.753703 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:23.753774 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:23.778169 1542350 cri.go:89] found id: ""
	I1213 16:17:23.778217 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.778227 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:23.778234 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:23.778301 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:23.802589 1542350 cri.go:89] found id: ""
	I1213 16:17:23.802622 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.802631 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:23.802637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:23.802708 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:23.832514 1542350 cri.go:89] found id: ""
	I1213 16:17:23.832548 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.832558 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:23.832569 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:23.832582 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:23.917876 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:23.909157   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.909608   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.910836   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.911543   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.913354   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:23.909157   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.909608   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.910836   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.911543   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.913354   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:23.917899 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:23.917918 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:23.943509 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:23.943548 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:23.971452 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:23.971478 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:24.027358 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:24.027396 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:26.545810 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:26.556391 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:26.556463 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:26.580187 1542350 cri.go:89] found id: ""
	I1213 16:17:26.580210 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.580219 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:26.580239 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:26.580300 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:26.608397 1542350 cri.go:89] found id: ""
	I1213 16:17:26.608420 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.608429 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:26.608435 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:26.608496 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:26.636638 1542350 cri.go:89] found id: ""
	I1213 16:17:26.636661 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.636669 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:26.636675 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:26.636734 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:26.665248 1542350 cri.go:89] found id: ""
	I1213 16:17:26.665274 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.665283 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:26.665289 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:26.665365 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:26.695808 1542350 cri.go:89] found id: ""
	I1213 16:17:26.695835 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.695854 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:26.695861 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:26.695918 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:26.721653 1542350 cri.go:89] found id: ""
	I1213 16:17:26.721678 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.721687 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:26.721693 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:26.721751 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:26.750218 1542350 cri.go:89] found id: ""
	I1213 16:17:26.750241 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.750250 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:26.750256 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:26.750313 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:26.777036 1542350 cri.go:89] found id: ""
	I1213 16:17:26.777059 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.777068 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:26.777077 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:26.777088 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:26.833887 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:26.833929 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:26.851275 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:26.851303 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:26.934951 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:26.926741   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.927535   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929160   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929458   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.930955   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:26.926741   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.927535   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929160   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929458   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.930955   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:26.934973 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:26.934985 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:26.960388 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:26.960424 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:29.488577 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:29.499475 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:29.499551 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:29.524176 1542350 cri.go:89] found id: ""
	I1213 16:17:29.524202 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.524212 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:29.524219 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:29.524281 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:29.558368 1542350 cri.go:89] found id: ""
	I1213 16:17:29.558393 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.558408 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:29.558415 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:29.558504 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:29.589170 1542350 cri.go:89] found id: ""
	I1213 16:17:29.589197 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.589206 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:29.589212 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:29.589273 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:29.621623 1542350 cri.go:89] found id: ""
	I1213 16:17:29.621697 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.621722 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:29.621741 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:29.621828 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:29.651459 1542350 cri.go:89] found id: ""
	I1213 16:17:29.651534 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.651557 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:29.651584 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:29.651712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:29.676637 1542350 cri.go:89] found id: ""
	I1213 16:17:29.676663 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.676673 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:29.676679 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:29.676752 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:29.701821 1542350 cri.go:89] found id: ""
	I1213 16:17:29.701845 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.701855 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:29.701861 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:29.701920 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:29.726528 1542350 cri.go:89] found id: ""
	I1213 16:17:29.726555 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.726564 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:29.726574 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:29.726585 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:29.781999 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:29.782035 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:29.798088 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:29.798116 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:29.881323 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:29.870998   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.871883   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.873746   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.874663   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.876377   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:29.870998   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.871883   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.873746   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.874663   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.876377   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:29.881348 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:29.881361 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:29.911425 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:29.911464 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:32.442588 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:32.453594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:32.453664 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:32.479865 1542350 cri.go:89] found id: ""
	I1213 16:17:32.479893 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.479902 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:32.479909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:32.479975 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:32.505131 1542350 cri.go:89] found id: ""
	I1213 16:17:32.505159 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.505168 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:32.505175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:32.505239 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:32.529697 1542350 cri.go:89] found id: ""
	I1213 16:17:32.529723 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.529732 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:32.529738 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:32.529796 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:32.554812 1542350 cri.go:89] found id: ""
	I1213 16:17:32.554834 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.554850 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:32.554856 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:32.554915 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:32.582244 1542350 cri.go:89] found id: ""
	I1213 16:17:32.582270 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.582279 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:32.582286 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:32.582347 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:32.613711 1542350 cri.go:89] found id: ""
	I1213 16:17:32.613738 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.613747 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:32.613754 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:32.613818 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:32.642070 1542350 cri.go:89] found id: ""
	I1213 16:17:32.642097 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.642106 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:32.642112 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:32.642168 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:32.667382 1542350 cri.go:89] found id: ""
	I1213 16:17:32.667406 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.667415 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:32.667424 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:32.667436 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:32.683777 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:32.683808 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:32.750802 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:32.740355   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.741255   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.742960   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.743298   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.746651   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:32.740355   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.741255   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.742960   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.743298   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.746651   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:32.750824 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:32.750838 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:32.776516 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:32.776551 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:32.809331 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:32.809358 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
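The block above repeats every couple of seconds for the rest of this test: minikube first looks for a running kube-apiserver process with pgrep, then asks crictl for each expected control-plane container, and only after all of those come back empty does it re-gather the kubelet, dmesg, describe-nodes, containerd, and container-status logs. A rough equivalent of one iteration of that wait loop, reconstructed from the commands shown in the log (a sketch, not minikube's actual implementation), is:

    # one iteration of the apiserver wait loop, as reconstructed from the log above
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && exit 0   # apiserver process already up; stop waiting
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        [ -z "$ids" ] && echo "no container matching \"$c\""
    done
    sudo journalctl -u kubelet -n 400 --no-pager > /tmp/kubelet.log   # keep the kubelet log for later inspection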
	I1213 16:17:35.374938 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:35.387203 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:35.387276 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:35.412099 1542350 cri.go:89] found id: ""
	I1213 16:17:35.412124 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.412133 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:35.412139 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:35.412195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:35.436994 1542350 cri.go:89] found id: ""
	I1213 16:17:35.437031 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.437040 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:35.437047 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:35.437115 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:35.461531 1542350 cri.go:89] found id: ""
	I1213 16:17:35.461554 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.461562 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:35.461568 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:35.461627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:35.486070 1542350 cri.go:89] found id: ""
	I1213 16:17:35.486095 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.486105 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:35.486118 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:35.486176 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:35.515476 1542350 cri.go:89] found id: ""
	I1213 16:17:35.515501 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.515510 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:35.515516 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:35.515576 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:35.545886 1542350 cri.go:89] found id: ""
	I1213 16:17:35.545959 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.545995 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:35.546020 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:35.546110 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:35.575465 1542350 cri.go:89] found id: ""
	I1213 16:17:35.575489 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.575498 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:35.575504 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:35.575563 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:35.607235 1542350 cri.go:89] found id: ""
	I1213 16:17:35.607264 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.607273 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:35.607282 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:35.607294 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:35.671811 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:35.671850 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:35.687939 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:35.687972 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:35.751714 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:35.742668   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.743226   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.744917   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.745517   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.747275   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:35.742668   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.743226   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.744917   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.745517   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.747275   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:35.751733 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:35.751746 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:35.777517 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:35.777554 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:38.308841 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:38.319569 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:38.319645 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:38.344249 1542350 cri.go:89] found id: ""
	I1213 16:17:38.344276 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.344285 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:38.344291 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:38.344349 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:38.368637 1542350 cri.go:89] found id: ""
	I1213 16:17:38.368666 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.368676 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:38.368682 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:38.368746 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:38.397310 1542350 cri.go:89] found id: ""
	I1213 16:17:38.397335 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.397344 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:38.397350 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:38.397409 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:38.426892 1542350 cri.go:89] found id: ""
	I1213 16:17:38.426967 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.426989 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:38.427008 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:38.427091 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:38.451400 1542350 cri.go:89] found id: ""
	I1213 16:17:38.451423 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.451432 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:38.451437 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:38.451500 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:38.476411 1542350 cri.go:89] found id: ""
	I1213 16:17:38.476433 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.476441 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:38.476448 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:38.476506 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:38.502060 1542350 cri.go:89] found id: ""
	I1213 16:17:38.502083 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.502092 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:38.502098 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:38.502158 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:38.527156 1542350 cri.go:89] found id: ""
	I1213 16:17:38.527217 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.527240 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:38.527264 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:38.527289 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:38.583123 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:38.583161 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:38.606934 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:38.607014 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:38.678774 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:38.669944   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.670742   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672406   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672726   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.674153   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:38.669944   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.670742   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672406   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672726   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.674153   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:38.678794 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:38.678806 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:38.703623 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:38.703656 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:41.235499 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:41.246098 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:41.246199 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:41.272817 1542350 cri.go:89] found id: ""
	I1213 16:17:41.272884 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.272907 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:41.272921 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:41.272995 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:41.297573 1542350 cri.go:89] found id: ""
	I1213 16:17:41.297599 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.297608 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:41.297614 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:41.297722 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:41.325595 1542350 cri.go:89] found id: ""
	I1213 16:17:41.325663 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.325695 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:41.325708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:41.325784 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:41.350495 1542350 cri.go:89] found id: ""
	I1213 16:17:41.350519 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.350528 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:41.350534 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:41.350593 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:41.374833 1542350 cri.go:89] found id: ""
	I1213 16:17:41.374860 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.374869 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:41.374874 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:41.374931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:41.400881 1542350 cri.go:89] found id: ""
	I1213 16:17:41.400911 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.400920 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:41.400926 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:41.400983 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:41.425159 1542350 cri.go:89] found id: ""
	I1213 16:17:41.425182 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.425191 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:41.425197 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:41.425255 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:41.449690 1542350 cri.go:89] found id: ""
	I1213 16:17:41.449765 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.449788 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:41.449808 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:41.449845 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:41.465414 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:41.465441 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:41.531758 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:41.522350   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.523854   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.524927   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526433   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526949   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:41.522350   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.523854   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.524927   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526433   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526949   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:41.531782 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:41.531795 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:41.557072 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:41.557104 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:41.589367 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:41.589397 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:44.161155 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:44.173267 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:44.173342 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:44.202655 1542350 cri.go:89] found id: ""
	I1213 16:17:44.202682 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.202692 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:44.202699 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:44.202758 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:44.227871 1542350 cri.go:89] found id: ""
	I1213 16:17:44.227897 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.227905 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:44.227911 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:44.227972 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:44.253446 1542350 cri.go:89] found id: ""
	I1213 16:17:44.253473 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.253481 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:44.253487 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:44.253543 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:44.279358 1542350 cri.go:89] found id: ""
	I1213 16:17:44.279383 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.279392 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:44.279398 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:44.279464 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:44.303249 1542350 cri.go:89] found id: ""
	I1213 16:17:44.303275 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.303284 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:44.303344 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:44.303410 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:44.327446 1542350 cri.go:89] found id: ""
	I1213 16:17:44.327471 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.327480 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:44.327486 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:44.327546 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:44.353767 1542350 cri.go:89] found id: ""
	I1213 16:17:44.353793 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.353802 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:44.353808 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:44.353865 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:44.382033 1542350 cri.go:89] found id: ""
	I1213 16:17:44.382060 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.382068 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:44.382078 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:44.382089 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:44.436599 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:44.436634 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:44.452268 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:44.452298 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:44.515099 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:44.507045   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.507744   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509208   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509703   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.511153   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:44.507045   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.507744   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509208   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509703   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.511153   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:44.515122 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:44.515134 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:44.540023 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:44.540059 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:47.069691 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:47.080543 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:47.080615 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:47.114986 1542350 cri.go:89] found id: ""
	I1213 16:17:47.115062 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.115085 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:47.115103 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:47.115194 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:47.148767 1542350 cri.go:89] found id: ""
	I1213 16:17:47.148840 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.148850 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:47.148857 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:47.148931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:47.174407 1542350 cri.go:89] found id: ""
	I1213 16:17:47.174436 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.174445 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:47.174452 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:47.175791 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:47.207990 1542350 cri.go:89] found id: ""
	I1213 16:17:47.208024 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.208034 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:47.208041 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:47.208115 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:47.232910 1542350 cri.go:89] found id: ""
	I1213 16:17:47.232938 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.232947 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:47.232953 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:47.233015 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:47.256927 1542350 cri.go:89] found id: ""
	I1213 16:17:47.256952 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.256961 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:47.256967 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:47.257049 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:47.285254 1542350 cri.go:89] found id: ""
	I1213 16:17:47.285281 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.285290 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:47.285296 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:47.285356 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:47.309997 1542350 cri.go:89] found id: ""
	I1213 16:17:47.310027 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.310037 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:47.310046 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:47.310060 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:47.326038 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:47.326073 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:47.390775 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:47.383176   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.383650   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385126   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385506   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.386933   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:47.383176   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.383650   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385126   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385506   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.386933   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:47.390796 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:47.390809 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:47.415331 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:47.415362 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:47.442477 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:47.442503 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:50.000902 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:50.015948 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:50.016030 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:50.046794 1542350 cri.go:89] found id: ""
	I1213 16:17:50.046819 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.046827 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:50.046834 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:50.046890 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:50.073072 1542350 cri.go:89] found id: ""
	I1213 16:17:50.073106 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.073116 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:50.073124 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:50.073186 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:50.111358 1542350 cri.go:89] found id: ""
	I1213 16:17:50.111384 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.111393 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:50.111403 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:50.111468 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:50.141482 1542350 cri.go:89] found id: ""
	I1213 16:17:50.141510 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.141519 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:50.141525 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:50.141584 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:50.168684 1542350 cri.go:89] found id: ""
	I1213 16:17:50.168711 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.168720 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:50.168727 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:50.168806 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:50.194609 1542350 cri.go:89] found id: ""
	I1213 16:17:50.194633 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.194642 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:50.194648 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:50.194708 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:50.220707 1542350 cri.go:89] found id: ""
	I1213 16:17:50.220732 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.220741 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:50.220746 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:50.220810 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:50.245930 1542350 cri.go:89] found id: ""
	I1213 16:17:50.245956 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.245965 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:50.245975 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:50.245987 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:50.301111 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:50.301147 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:50.317024 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:50.317051 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:50.379354 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:50.370376   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.371063   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.372767   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.373333   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.375036   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:50.370376   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.371063   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.372767   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.373333   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.375036   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:50.379375 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:50.379388 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:50.403891 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:50.403925 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:52.933071 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:52.944075 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:52.944148 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:52.969292 1542350 cri.go:89] found id: ""
	I1213 16:17:52.969318 1542350 logs.go:282] 0 containers: []
	W1213 16:17:52.969327 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:52.969333 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:52.969393 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:52.997688 1542350 cri.go:89] found id: ""
	I1213 16:17:52.997717 1542350 logs.go:282] 0 containers: []
	W1213 16:17:52.997727 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:52.997733 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:52.997795 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:53.024102 1542350 cri.go:89] found id: ""
	I1213 16:17:53.024134 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.024144 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:53.024150 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:53.024214 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:53.054126 1542350 cri.go:89] found id: ""
	I1213 16:17:53.054149 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.054159 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:53.054165 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:53.054227 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:53.078840 1542350 cri.go:89] found id: ""
	I1213 16:17:53.078918 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.078940 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:53.078958 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:53.079041 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:53.134282 1542350 cri.go:89] found id: ""
	I1213 16:17:53.134313 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.134326 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:53.134332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:53.134401 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:53.170263 1542350 cri.go:89] found id: ""
	I1213 16:17:53.170287 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.170296 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:53.170302 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:53.170366 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:53.195555 1542350 cri.go:89] found id: ""
	I1213 16:17:53.195578 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.195587 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:53.195596 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:53.195612 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:53.221475 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:53.221510 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:53.256145 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:53.256172 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:53.312142 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:53.312178 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:53.328755 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:53.328784 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:53.392981 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:53.384949   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.385331   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.386801   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.387094   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.388517   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:53.384949   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.385331   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.386801   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.387094   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.388517   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:55.894678 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:55.905837 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:55.905910 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:55.931137 1542350 cri.go:89] found id: ""
	I1213 16:17:55.931159 1542350 logs.go:282] 0 containers: []
	W1213 16:17:55.931168 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:55.931175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:55.931236 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:55.955775 1542350 cri.go:89] found id: ""
	I1213 16:17:55.955801 1542350 logs.go:282] 0 containers: []
	W1213 16:17:55.955810 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:55.955817 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:55.955877 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:55.981227 1542350 cri.go:89] found id: ""
	I1213 16:17:55.981253 1542350 logs.go:282] 0 containers: []
	W1213 16:17:55.981262 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:55.981268 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:55.981329 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:56.008866 1542350 cri.go:89] found id: ""
	I1213 16:17:56.008892 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.008902 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:56.008909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:56.008975 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:56.035606 1542350 cri.go:89] found id: ""
	I1213 16:17:56.035635 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.035644 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:56.035650 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:56.035712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:56.061753 1542350 cri.go:89] found id: ""
	I1213 16:17:56.061780 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.061789 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:56.061795 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:56.061858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:56.099036 1542350 cri.go:89] found id: ""
	I1213 16:17:56.099065 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.099074 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:56.099081 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:56.099142 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:56.133464 1542350 cri.go:89] found id: ""
	I1213 16:17:56.133491 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.133500 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:56.133510 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:56.133522 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:56.155287 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:56.155412 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:56.223561 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:56.214782   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.215583   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217326   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217944   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.219582   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:56.214782   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.215583   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217326   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217944   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.219582   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:56.223629 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:56.223650 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:56.249923 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:56.249965 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:56.280662 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:56.280692 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:58.836837 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:58.848594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:58.848659 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:58.881904 1542350 cri.go:89] found id: ""
	I1213 16:17:58.881927 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.881935 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:58.881941 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:58.882001 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:58.917932 1542350 cri.go:89] found id: ""
	I1213 16:17:58.917954 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.917963 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:58.917969 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:58.918028 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:58.945580 1542350 cri.go:89] found id: ""
	I1213 16:17:58.945653 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.945668 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:58.945676 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:58.945753 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:58.971398 1542350 cri.go:89] found id: ""
	I1213 16:17:58.971424 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.971434 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:58.971440 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:58.971503 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:59.001302 1542350 cri.go:89] found id: ""
	I1213 16:17:59.001329 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.001339 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:59.001345 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:59.001409 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:59.028353 1542350 cri.go:89] found id: ""
	I1213 16:17:59.028379 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.028388 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:59.028394 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:59.028470 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:59.052548 1542350 cri.go:89] found id: ""
	I1213 16:17:59.052577 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.052586 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:59.052593 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:59.052653 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:59.077515 1542350 cri.go:89] found id: ""
	I1213 16:17:59.077541 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.077550 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:59.077560 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:59.077571 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:59.141173 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:59.141249 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:59.158291 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:59.158371 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:59.225799 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:59.216416   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.217255   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.219459   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.220025   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.221754   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:59.216416   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.217255   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.219459   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.220025   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.221754   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:59.225867 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:59.225890 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:59.251561 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:59.251597 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:01.784053 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:01.795325 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:01.795393 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:01.819579 1542350 cri.go:89] found id: ""
	I1213 16:18:01.819605 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.819615 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:01.819622 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:01.819683 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:01.857561 1542350 cri.go:89] found id: ""
	I1213 16:18:01.857588 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.857597 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:01.857604 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:01.857668 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:01.893605 1542350 cri.go:89] found id: ""
	I1213 16:18:01.893633 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.893642 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:01.893650 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:01.893706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:01.931676 1542350 cri.go:89] found id: ""
	I1213 16:18:01.931783 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.931803 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:01.931812 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:01.931935 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:01.959175 1542350 cri.go:89] found id: ""
	I1213 16:18:01.959249 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.959272 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:01.959292 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:01.959398 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:01.984753 1542350 cri.go:89] found id: ""
	I1213 16:18:01.984784 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.984794 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:01.984800 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:01.984865 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:02.016830 1542350 cri.go:89] found id: ""
	I1213 16:18:02.016860 1542350 logs.go:282] 0 containers: []
	W1213 16:18:02.016870 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:02.016876 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:02.016939 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:02.042747 1542350 cri.go:89] found id: ""
	I1213 16:18:02.042776 1542350 logs.go:282] 0 containers: []
	W1213 16:18:02.042785 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:02.042794 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:02.042805 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:02.101057 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:02.101093 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:02.118948 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:02.118972 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:02.188051 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:02.179066   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180018   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180758   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182317   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182771   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:02.179066   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180018   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180758   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182317   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182771   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:02.188077 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:02.188091 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:02.214276 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:02.214316 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:04.742630 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:04.753656 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:04.753725 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:04.779281 1542350 cri.go:89] found id: ""
	I1213 16:18:04.779338 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.779349 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:04.779355 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:04.779418 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:04.806060 1542350 cri.go:89] found id: ""
	I1213 16:18:04.806099 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.806108 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:04.806114 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:04.806195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:04.831390 1542350 cri.go:89] found id: ""
	I1213 16:18:04.831416 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.831425 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:04.831432 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:04.831501 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:04.865636 1542350 cri.go:89] found id: ""
	I1213 16:18:04.865663 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.865673 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:04.865680 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:04.865746 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:04.893812 1542350 cri.go:89] found id: ""
	I1213 16:18:04.893836 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.893845 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:04.893851 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:04.893916 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:04.922033 1542350 cri.go:89] found id: ""
	I1213 16:18:04.922062 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.922071 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:04.922077 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:04.922135 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:04.952026 1542350 cri.go:89] found id: ""
	I1213 16:18:04.952052 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.952061 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:04.952068 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:04.952129 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:04.979878 1542350 cri.go:89] found id: ""
	I1213 16:18:04.979901 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.979910 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:04.979919 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:04.979931 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:05.038448 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:05.038485 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:05.055056 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:05.055086 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:05.138791 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:05.128391   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.129071   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.130643   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.131070   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.134791   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:05.128391   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.129071   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.130643   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.131070   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.134791   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:05.138815 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:05.138828 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:05.170511 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:05.170549 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:07.701516 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:07.711811 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:07.711881 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:07.737115 1542350 cri.go:89] found id: ""
	I1213 16:18:07.737139 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.737148 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:07.737154 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:07.737216 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:07.761282 1542350 cri.go:89] found id: ""
	I1213 16:18:07.761305 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.761313 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:07.761319 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:07.761375 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:07.788777 1542350 cri.go:89] found id: ""
	I1213 16:18:07.788804 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.788813 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:07.788828 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:07.788893 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:07.813606 1542350 cri.go:89] found id: ""
	I1213 16:18:07.813633 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.813642 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:07.813650 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:07.813762 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:07.846070 1542350 cri.go:89] found id: ""
	I1213 16:18:07.846100 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.846109 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:07.846115 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:07.846178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:07.877868 1542350 cri.go:89] found id: ""
	I1213 16:18:07.877894 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.877903 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:07.877909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:07.877978 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:07.906297 1542350 cri.go:89] found id: ""
	I1213 16:18:07.906322 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.906331 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:07.906337 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:07.906411 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:07.935165 1542350 cri.go:89] found id: ""
	I1213 16:18:07.935191 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.935200 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:07.935209 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:07.935221 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:07.990632 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:07.990666 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:08.006620 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:08.006668 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:08.074292 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:08.065222   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.066181   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.067860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.068200   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.069739   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:08.065222   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.066181   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.067860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.068200   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.069739   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:08.074313 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:08.074338 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:08.103200 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:08.103236 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:10.643571 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:10.654051 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:10.654120 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:10.678184 1542350 cri.go:89] found id: ""
	I1213 16:18:10.678213 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.678222 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:10.678229 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:10.678286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:10.714102 1542350 cri.go:89] found id: ""
	I1213 16:18:10.714129 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.714137 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:10.714143 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:10.714204 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:10.738091 1542350 cri.go:89] found id: ""
	I1213 16:18:10.738114 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.738123 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:10.738129 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:10.738187 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:10.762969 1542350 cri.go:89] found id: ""
	I1213 16:18:10.762996 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.763005 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:10.763010 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:10.763068 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:10.788695 1542350 cri.go:89] found id: ""
	I1213 16:18:10.788718 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.788726 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:10.788732 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:10.788790 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:10.813304 1542350 cri.go:89] found id: ""
	I1213 16:18:10.813331 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.813339 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:10.813346 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:10.813404 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:10.840988 1542350 cri.go:89] found id: ""
	I1213 16:18:10.841013 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.841022 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:10.841028 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:10.841085 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:10.872923 1542350 cri.go:89] found id: ""
	I1213 16:18:10.872947 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.872957 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:10.872966 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:10.872978 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:10.913313 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:10.913342 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:10.970044 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:10.970079 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:10.986369 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:10.986399 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:11.056440 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:11.047528   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.048477   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050273   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050853   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.052407   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:11.047528   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.048477   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050273   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050853   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.052407   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:11.056461 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:11.056474 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:13.582630 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:13.593495 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:13.593570 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:13.618406 1542350 cri.go:89] found id: ""
	I1213 16:18:13.618429 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.618438 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:13.618444 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:13.618503 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:13.643366 1542350 cri.go:89] found id: ""
	I1213 16:18:13.643392 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.643401 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:13.643407 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:13.643470 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:13.668878 1542350 cri.go:89] found id: ""
	I1213 16:18:13.668903 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.668912 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:13.668918 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:13.668976 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:13.694282 1542350 cri.go:89] found id: ""
	I1213 16:18:13.694309 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.694318 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:13.694324 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:13.694383 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:13.722288 1542350 cri.go:89] found id: ""
	I1213 16:18:13.722318 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.722326 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:13.722332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:13.722391 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:13.749131 1542350 cri.go:89] found id: ""
	I1213 16:18:13.749156 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.749165 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:13.749177 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:13.749234 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:13.772877 1542350 cri.go:89] found id: ""
	I1213 16:18:13.772905 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.772915 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:13.772924 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:13.773024 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:13.797195 1542350 cri.go:89] found id: ""
	I1213 16:18:13.797222 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.797232 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:13.797241 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:13.797253 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:13.875404 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:13.862729   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.863769   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.867326   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869105   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869716   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:13.862729   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.863769   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.867326   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869105   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869716   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:13.875426 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:13.875439 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:13.907083 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:13.907122 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:13.940383 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:13.940412 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:13.999033 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:13.999073 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:16.517512 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:16.531616 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:16.531687 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:16.555921 1542350 cri.go:89] found id: ""
	I1213 16:18:16.555944 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.555952 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:16.555958 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:16.556017 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:16.585501 1542350 cri.go:89] found id: ""
	I1213 16:18:16.585523 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.585532 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:16.585538 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:16.585597 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:16.609776 1542350 cri.go:89] found id: ""
	I1213 16:18:16.609800 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.609810 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:16.609815 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:16.609874 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:16.633727 1542350 cri.go:89] found id: ""
	I1213 16:18:16.633801 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.633828 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:16.633847 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:16.633919 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:16.663010 1542350 cri.go:89] found id: ""
	I1213 16:18:16.663034 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.663042 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:16.663048 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:16.663104 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:16.689483 1542350 cri.go:89] found id: ""
	I1213 16:18:16.689506 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.689514 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:16.689521 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:16.689579 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:16.713920 1542350 cri.go:89] found id: ""
	I1213 16:18:16.713946 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.713955 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:16.713963 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:16.714023 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:16.739270 1542350 cri.go:89] found id: ""
	I1213 16:18:16.739297 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.739366 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:16.739377 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:16.739391 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:16.805237 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:16.796686   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.797427   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799164   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799670   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.801345   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:16.796686   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.797427   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799164   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799670   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.801345   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:16.805260 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:16.805272 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:16.830391 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:16.830421 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:16.875174 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:16.875203 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:16.940670 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:16.940707 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:19.457858 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:19.469305 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:19.469382 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:19.494702 1542350 cri.go:89] found id: ""
	I1213 16:18:19.494728 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.494739 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:19.494745 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:19.494805 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:19.526787 1542350 cri.go:89] found id: ""
	I1213 16:18:19.526811 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.526820 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:19.526826 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:19.526892 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:19.553929 1542350 cri.go:89] found id: ""
	I1213 16:18:19.553952 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.553961 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:19.553967 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:19.554025 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:19.578994 1542350 cri.go:89] found id: ""
	I1213 16:18:19.579021 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.579029 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:19.579036 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:19.579094 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:19.605160 1542350 cri.go:89] found id: ""
	I1213 16:18:19.605184 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.605202 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:19.605209 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:19.605271 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:19.629853 1542350 cri.go:89] found id: ""
	I1213 16:18:19.629880 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.629889 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:19.629896 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:19.629963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:19.654551 1542350 cri.go:89] found id: ""
	I1213 16:18:19.654578 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.654588 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:19.654594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:19.654674 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:19.679386 1542350 cri.go:89] found id: ""
	I1213 16:18:19.679410 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.679420 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:19.679429 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:19.679440 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:19.704792 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:19.704824 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:19.733848 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:19.733877 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:19.789321 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:19.789357 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:19.805414 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:19.805442 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:19.893754 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:19.884877   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886081   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886716   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888200   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888520   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:19.884877   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886081   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886716   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888200   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888520   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:22.394654 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:22.408580 1542350 out.go:203] 
	W1213 16:18:22.411606 1542350 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 16:18:22.411646 1542350 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 16:18:22.411657 1542350 out.go:285] * Related issues:
	* Related issues:
	W1213 16:18:22.411669 1542350 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1213 16:18:22.411682 1542350 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1213 16:18:22.414454 1542350 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 105
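The exit status 105 above belongs to the same process that printed "Exiting due to K8S_APISERVER_MISSING" at the end of the captured stderr. A minimal sketch for reproducing the probe by hand, reusing only the profile name and flags already recorded in this report (the redirect target second-start.log is a hypothetical file name):

	# re-run the failing second start with verbose logging
	out/minikube-linux-arm64 start -p newest-cni-526531 --memory=3072 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 2>second-start.log

	# repeat the apiserver probes the start loop runs until it gives up
	out/minikube-linux-arm64 ssh -p newest-cni-526531 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	out/minikube-linux-arm64 ssh -p newest-cni-526531 -- sudo crictl ps -a --quiet --name=kube-apiserver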
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
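The proxy snapshot above is read from the host environment of the test runner. The same values can be checked by hand with a plain shell one-liner (a sketch, nothing minikube-specific assumed):

	# list any proxy variables the minikube start would have inherited
	env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo 'no proxy variables set'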
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-526531
E1213 16:18:23.672123 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:244: (dbg) docker inspect newest-cni-526531:

-- stdout --
	[
	    {
	        "Id": "dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54",
	        "Created": "2025-12-13T16:02:15.548035148Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1542480,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T16:12:14.158493479Z",
	            "FinishedAt": "2025-12-13T16:12:12.79865571Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/hosts",
	        "LogPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54-json.log",
	        "Name": "/newest-cni-526531",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-526531:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-526531",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54",
	                "LowerDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-526531",
	                "Source": "/var/lib/docker/volumes/newest-cni-526531/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-526531",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-526531",
	                "name.minikube.sigs.k8s.io": "newest-cni-526531",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "57c40ce56d621d0f69c7bac6d3cb56a638b53bb82fd302b1930b9f51563e995b",
	            "SandboxKey": "/var/run/docker/netns/57c40ce56d62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34233"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34234"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34237"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34235"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34236"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-526531": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:43:0b:15:7e:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ae0d89b977ec0aa4cc17943d84decbf5f3cf47ff39573e4d4fdb9e9873e2828c",
	                    "EndpointID": "4d19fec2228064ef379084c28bbbd96c0fa36a4142ac70319780a70953fdc4e8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-526531",
	                        "dd2af60ccebf"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
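Individual fields of the inspect dump above can be extracted with docker's Go-template --format/-f flag, the same mechanism the start log below uses for the 22/tcp port. A sketch limited to fields that appear in this output:

	# host port mapped to the apiserver port 8443 inside the node container (34236 in this run)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-526531

	# container state plus the IP assigned on the profile network (192.168.76.2 here)
	docker container inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newest-cni-526531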
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-526531 -n newest-cni-526531
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-526531 -n newest-cni-526531: exit status 2 (373.109887ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
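The status probe above selects a single field with a Go template; other fields can be pulled the same way. A sketch, where the .Kubelet and .APIServer field names are assumptions based on minikube's default status output (only .Host appears in this report):

	# host state only, as the helper above queries it
	out/minikube-linux-arm64 status --format='{{.Host}}' -p newest-cni-526531

	# host, kubelet and apiserver state on one line, plus the exit code (field names assumed)
	out/minikube-linux-arm64 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}}' -p newest-cni-526531; echo "exit=$?"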
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-526531 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-526531 logs -n 25: (1.581047965s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-270324 image list --format=json                                                                                                                                                                                                                │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ pause   │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ unpause │ -p embed-certs-270324 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p disable-driver-mounts-614298                                                                                                                                                                                                                            │ disable-driver-mounts-614298 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 16:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-946932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:00 UTC │
	│ stop    │ -p default-k8s-diff-port-946932 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-946932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ image   │ default-k8s-diff-port-946932 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ pause   │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ unpause │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ start   │ -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-439544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	│ stop    │ -p no-preload-439544 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │ 13 Dec 25 16:04 UTC │
	│ addons  │ enable dashboard -p no-preload-439544 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │ 13 Dec 25 16:04 UTC │
	│ start   │ -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-526531 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:10 UTC │                     │
	│ stop    │ -p newest-cni-526531 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:12 UTC │ 13 Dec 25 16:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-526531 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:12 UTC │ 13 Dec 25 16:12 UTC │
	│ start   │ -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:12 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 16:12:13
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 16:12:13.872500 1542350 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:12:13.872721 1542350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:12:13.872749 1542350 out.go:374] Setting ErrFile to fd 2...
	I1213 16:12:13.872769 1542350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:12:13.873083 1542350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:12:13.873513 1542350 out.go:368] Setting JSON to false
	I1213 16:12:13.874453 1542350 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28483,"bootTime":1765613851,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:12:13.874604 1542350 start.go:143] virtualization:  
	I1213 16:12:13.877765 1542350 out.go:179] * [newest-cni-526531] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:12:13.881549 1542350 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:12:13.881619 1542350 notify.go:221] Checking for updates...
	I1213 16:12:13.887324 1542350 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:12:13.890274 1542350 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:12:13.893162 1542350 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:12:13.896033 1542350 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:12:13.898948 1542350 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:12:13.902364 1542350 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:12:13.902980 1542350 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:12:13.935990 1542350 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:12:13.936167 1542350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:12:14.000058 1542350 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:12:13.991072746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:12:14.000167 1542350 docker.go:319] overlay module found
	I1213 16:12:14.005438 1542350 out.go:179] * Using the docker driver based on existing profile
	I1213 16:12:14.008564 1542350 start.go:309] selected driver: docker
	I1213 16:12:14.008597 1542350 start.go:927] validating driver "docker" against &{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:12:14.008696 1542350 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:12:14.009457 1542350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:12:14.067852 1542350 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:12:14.058134833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:12:14.068237 1542350 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 16:12:14.068271 1542350 cni.go:84] Creating CNI manager for ""
	I1213 16:12:14.068329 1542350 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:12:14.068382 1542350 start.go:353] cluster config:
	{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:12:14.071643 1542350 out.go:179] * Starting "newest-cni-526531" primary control-plane node in "newest-cni-526531" cluster
	I1213 16:12:14.074436 1542350 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:12:14.077449 1542350 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:12:14.080394 1542350 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:12:14.080442 1542350 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 16:12:14.080452 1542350 cache.go:65] Caching tarball of preloaded images
	I1213 16:12:14.080507 1542350 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:12:14.080564 1542350 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 16:12:14.080575 1542350 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 16:12:14.080690 1542350 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:12:14.101187 1542350 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:12:14.101205 1542350 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:12:14.101219 1542350 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:12:14.101249 1542350 start.go:360] acquireMachinesLock for newest-cni-526531: {Name:mk8328ad899404812480e264edd6e13cbbd26230 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:12:14.101300 1542350 start.go:364] duration metric: took 35.502µs to acquireMachinesLock for "newest-cni-526531"
	I1213 16:12:14.101319 1542350 start.go:96] Skipping create...Using existing machine configuration
	I1213 16:12:14.101324 1542350 fix.go:54] fixHost starting: 
	I1213 16:12:14.101579 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:14.120089 1542350 fix.go:112] recreateIfNeeded on newest-cni-526531: state=Stopped err=<nil>
	W1213 16:12:14.120117 1542350 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 16:12:14.123566 1542350 out.go:252] * Restarting existing docker container for "newest-cni-526531" ...
	I1213 16:12:14.123658 1542350 cli_runner.go:164] Run: docker start newest-cni-526531
	I1213 16:12:14.407857 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:14.431483 1542350 kic.go:430] container "newest-cni-526531" state is running.
	I1213 16:12:14.431880 1542350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:12:14.455073 1542350 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:12:14.455509 1542350 machine.go:94] provisionDockerMachine start ...
	I1213 16:12:14.455579 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:14.483076 1542350 main.go:143] libmachine: Using SSH client type: native
	I1213 16:12:14.483636 1542350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I1213 16:12:14.483652 1542350 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:12:14.484350 1542350 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 16:12:17.634930 1542350 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:12:17.634954 1542350 ubuntu.go:182] provisioning hostname "newest-cni-526531"
	I1213 16:12:17.635019 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:17.654681 1542350 main.go:143] libmachine: Using SSH client type: native
	I1213 16:12:17.654996 1542350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I1213 16:12:17.655008 1542350 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-526531 && echo "newest-cni-526531" | sudo tee /etc/hostname
	I1213 16:12:17.812861 1542350 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:12:17.812938 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:17.830348 1542350 main.go:143] libmachine: Using SSH client type: native
	I1213 16:12:17.830658 1542350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I1213 16:12:17.830675 1542350 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-526531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-526531/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-526531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:12:17.987587 1542350 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:12:17.987621 1542350 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:12:17.987641 1542350 ubuntu.go:190] setting up certificates
	I1213 16:12:17.987659 1542350 provision.go:84] configureAuth start
	I1213 16:12:17.987726 1542350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:12:18.011145 1542350 provision.go:143] copyHostCerts
	I1213 16:12:18.011230 1542350 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:12:18.011240 1542350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:12:18.011430 1542350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:12:18.011569 1542350 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:12:18.011584 1542350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:12:18.011623 1542350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:12:18.011690 1542350 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:12:18.011698 1542350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:12:18.011724 1542350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:12:18.011776 1542350 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.newest-cni-526531 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-526531]
	I1213 16:12:18.508738 1542350 provision.go:177] copyRemoteCerts
	I1213 16:12:18.508811 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:12:18.508861 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.526422 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:18.636742 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 16:12:18.655155 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 16:12:18.674107 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:12:18.692128 1542350 provision.go:87] duration metric: took 704.439864ms to configureAuth
	I1213 16:12:18.692158 1542350 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:12:18.692373 1542350 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:12:18.692387 1542350 machine.go:97] duration metric: took 4.236863655s to provisionDockerMachine
	I1213 16:12:18.692395 1542350 start.go:293] postStartSetup for "newest-cni-526531" (driver="docker")
	I1213 16:12:18.692409 1542350 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:12:18.692476 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:12:18.692523 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.710444 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:18.815900 1542350 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:12:18.819552 1542350 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:12:18.819582 1542350 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:12:18.819595 1542350 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:12:18.819651 1542350 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:12:18.819740 1542350 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:12:18.819846 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:12:18.827635 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:12:18.845967 1542350 start.go:296] duration metric: took 153.553828ms for postStartSetup
	I1213 16:12:18.846048 1542350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:12:18.846103 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.863404 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:18.964333 1542350 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:12:18.969276 1542350 fix.go:56] duration metric: took 4.867943668s for fixHost
	I1213 16:12:18.969308 1542350 start.go:83] releasing machines lock for "newest-cni-526531", held for 4.867999692s
	I1213 16:12:18.969378 1542350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:12:18.986065 1542350 ssh_runner.go:195] Run: cat /version.json
	I1213 16:12:18.986168 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.986433 1542350 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:12:18.986485 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:19.008809 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:19.015681 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:19.197190 1542350 ssh_runner.go:195] Run: systemctl --version
	I1213 16:12:19.203734 1542350 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:12:19.208293 1542350 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:12:19.208365 1542350 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:12:19.216699 1542350 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 16:12:19.216724 1542350 start.go:496] detecting cgroup driver to use...
	I1213 16:12:19.216769 1542350 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:12:19.216822 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:12:19.235051 1542350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:12:19.248627 1542350 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:12:19.248695 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:12:19.264536 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:12:19.278273 1542350 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:12:19.415282 1542350 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:12:19.542944 1542350 docker.go:234] disabling docker service ...
	I1213 16:12:19.543049 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:12:19.558893 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:12:19.572698 1542350 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:12:19.700893 1542350 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:12:19.830331 1542350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:12:19.843617 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:12:19.858193 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:12:19.867834 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:12:19.877291 1542350 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:12:19.877362 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:12:19.886078 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:12:19.894812 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:12:19.903917 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:12:19.912720 1542350 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:12:19.921167 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:12:19.930798 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:12:19.940230 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:12:19.950040 1542350 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:12:19.958360 1542350 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:12:19.966286 1542350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:12:20.089676 1542350 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 16:12:20.224467 1542350 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:12:20.224608 1542350 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:12:20.228661 1542350 start.go:564] Will wait 60s for crictl version
	I1213 16:12:20.228772 1542350 ssh_runner.go:195] Run: which crictl
	I1213 16:12:20.232454 1542350 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:12:20.257719 1542350 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 16:12:20.257840 1542350 ssh_runner.go:195] Run: containerd --version
	I1213 16:12:20.279500 1542350 ssh_runner.go:195] Run: containerd --version
	I1213 16:12:20.302783 1542350 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 16:12:20.305579 1542350 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:12:20.322844 1542350 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 16:12:20.326903 1542350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:12:20.339926 1542350 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 16:12:20.342782 1542350 kubeadm.go:884] updating cluster {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:12:20.342928 1542350 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:12:20.343016 1542350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:12:20.367771 1542350 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:12:20.367795 1542350 containerd.go:534] Images already preloaded, skipping extraction
	I1213 16:12:20.367857 1542350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:12:20.393096 1542350 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:12:20.393118 1542350 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:12:20.393126 1542350 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 16:12:20.393232 1542350 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-526531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 16:12:20.393305 1542350 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:12:20.418251 1542350 cni.go:84] Creating CNI manager for ""
	I1213 16:12:20.418277 1542350 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:12:20.418295 1542350 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 16:12:20.418318 1542350 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-526531 NodeName:newest-cni-526531 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:12:20.418435 1542350 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-526531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 16:12:20.418510 1542350 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 16:12:20.426561 1542350 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:12:20.426663 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:12:20.434234 1542350 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 16:12:20.447269 1542350 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 16:12:20.459764 1542350 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 16:12:20.473147 1542350 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:12:20.476975 1542350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:12:20.486881 1542350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:12:20.634044 1542350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:12:20.650082 1542350 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531 for IP: 192.168.76.2
	I1213 16:12:20.650107 1542350 certs.go:195] generating shared ca certs ...
	I1213 16:12:20.650125 1542350 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:20.650260 1542350 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:12:20.650315 1542350 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:12:20.650327 1542350 certs.go:257] generating profile certs ...
	I1213 16:12:20.650431 1542350 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key
	I1213 16:12:20.650494 1542350 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7
	I1213 16:12:20.650541 1542350 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key
	I1213 16:12:20.650652 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:12:20.650691 1542350 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:12:20.650704 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:12:20.650731 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:12:20.650764 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:12:20.650791 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:12:20.650844 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:12:20.651682 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:12:20.679737 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:12:20.697714 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:12:20.716102 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:12:20.734754 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 16:12:20.752380 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 16:12:20.770335 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:12:20.787592 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1213 16:12:20.805866 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:12:20.823616 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:12:20.845606 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:12:20.863659 1542350 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:12:20.877321 1542350 ssh_runner.go:195] Run: openssl version
	I1213 16:12:20.884096 1542350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.891462 1542350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:12:20.900719 1542350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.905878 1542350 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.905990 1542350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.952615 1542350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:12:20.960412 1542350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:20.967994 1542350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:12:20.975909 1542350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:20.979941 1542350 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:20.980042 1542350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:21.021453 1542350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:12:21.029467 1542350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.037114 1542350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:12:21.045054 1542350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.049353 1542350 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.049420 1542350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.090431 1542350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:12:21.097998 1542350 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:12:21.101759 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 16:12:21.142651 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 16:12:21.183449 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 16:12:21.224713 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 16:12:21.267101 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 16:12:21.308542 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 16:12:21.350324 1542350 kubeadm.go:401] StartCluster: {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:12:21.350489 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:12:21.350594 1542350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:12:21.381089 1542350 cri.go:89] found id: ""
	I1213 16:12:21.381225 1542350 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:12:21.391210 1542350 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 16:12:21.391281 1542350 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 16:12:21.391387 1542350 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 16:12:21.399153 1542350 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 16:12:21.399882 1542350 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-526531" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:12:21.400209 1542350 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-526531" cluster setting kubeconfig missing "newest-cni-526531" context setting]
	I1213 16:12:21.400761 1542350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:21.402579 1542350 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 16:12:21.410218 1542350 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 16:12:21.410252 1542350 kubeadm.go:602] duration metric: took 18.943347ms to restartPrimaryControlPlane
	I1213 16:12:21.410262 1542350 kubeadm.go:403] duration metric: took 59.957451ms to StartCluster
	I1213 16:12:21.410276 1542350 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:21.410337 1542350 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:12:21.411206 1542350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:21.411496 1542350 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:12:21.411842 1542350 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 16:12:21.411918 1542350 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-526531"
	I1213 16:12:21.411932 1542350 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-526531"
	I1213 16:12:21.411959 1542350 host.go:66] Checking if "newest-cni-526531" exists ...
	I1213 16:12:21.412409 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.412632 1542350 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:12:21.412699 1542350 addons.go:70] Setting dashboard=true in profile "newest-cni-526531"
	I1213 16:12:21.412715 1542350 addons.go:239] Setting addon dashboard=true in "newest-cni-526531"
	W1213 16:12:21.412722 1542350 addons.go:248] addon dashboard should already be in state true
	I1213 16:12:21.412753 1542350 host.go:66] Checking if "newest-cni-526531" exists ...
	I1213 16:12:21.413150 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.417035 1542350 addons.go:70] Setting default-storageclass=true in profile "newest-cni-526531"
	I1213 16:12:21.417076 1542350 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-526531"
	I1213 16:12:21.417425 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.417785 1542350 out.go:179] * Verifying Kubernetes components...
	I1213 16:12:21.420756 1542350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:12:21.445354 1542350 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 16:12:21.448121 1542350 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:12:21.448150 1542350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 16:12:21.448220 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:21.451677 1542350 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 16:12:21.454559 1542350 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 16:12:21.457364 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 16:12:21.457390 1542350 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 16:12:21.457468 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:21.461079 1542350 addons.go:239] Setting addon default-storageclass=true in "newest-cni-526531"
	I1213 16:12:21.461127 1542350 host.go:66] Checking if "newest-cni-526531" exists ...
	I1213 16:12:21.461533 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.475798 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:21.512911 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:21.534060 1542350 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:21.534082 1542350 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 16:12:21.534143 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:21.567579 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:21.655778 1542350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:12:21.660712 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:12:21.695006 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 16:12:21.695031 1542350 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 16:12:21.711844 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 16:12:21.711868 1542350 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 16:12:21.726264 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 16:12:21.726287 1542350 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 16:12:21.742159 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 16:12:21.742183 1542350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 16:12:21.759213 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 16:12:21.759234 1542350 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 16:12:21.769713 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:21.791192 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 16:12:21.791260 1542350 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 16:12:21.814992 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 16:12:21.815063 1542350 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 16:12:21.830895 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 16:12:21.830972 1542350 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 16:12:21.849742 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:21.849815 1542350 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 16:12:21.864289 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:22.085788 1542350 api_server.go:52] waiting for apiserver process to appear ...
	I1213 16:12:22.085922 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:22.086102 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.086159 1542350 retry.go:31] will retry after 179.056392ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.086246 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.086353 1542350 retry.go:31] will retry after 181.278424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.086609 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.086645 1542350 retry.go:31] will retry after 135.21458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.222538 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:22.266024 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:12:22.268540 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:22.304395 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.304479 1542350 retry.go:31] will retry after 553.734459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.383592 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.383626 1542350 retry.go:31] will retry after 310.627988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.384428 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.384454 1542350 retry.go:31] will retry after 477.647599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.586862 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:22.695343 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:22.754692 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.754771 1542350 retry.go:31] will retry after 349.01084ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.858966 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:22.862536 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:22.953516 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.953561 1542350 retry.go:31] will retry after 343.489775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.953788 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.953849 1542350 retry.go:31] will retry after 703.913124ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.086088 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:23.104680 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:23.181935 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.181974 1542350 retry.go:31] will retry after 792.501261ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.297213 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:23.357629 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.357664 1542350 retry.go:31] will retry after 710.733017ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.586938 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:23.658890 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:23.729079 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.729127 1542350 retry.go:31] will retry after 642.679357ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.975021 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:24.036696 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.036729 1542350 retry.go:31] will retry after 1.762152539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.068939 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:24.086560 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:24.136068 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.136100 1542350 retry.go:31] will retry after 670.883469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.372395 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:24.444952 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.444996 1542350 retry.go:31] will retry after 1.594344916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.586388 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:24.807252 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:24.873210 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.873241 1542350 retry.go:31] will retry after 1.504699438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:25.086635 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:25.586697 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:25.799081 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:25.864095 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:25.864173 1542350 retry.go:31] will retry after 2.833515163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.040555 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:26.086244 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:26.134589 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.134626 1542350 retry.go:31] will retry after 2.268954348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.378204 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:26.437143 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.437179 1542350 retry.go:31] will retry after 2.009206759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.586404 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:27.086045 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:27.586074 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:28.086070 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:28.404537 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:28.446967 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:28.469203 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.469234 1542350 retry.go:31] will retry after 1.799417627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:28.516574 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.516611 1542350 retry.go:31] will retry after 2.723803306s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.586847 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:28.698086 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:28.762693 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.762729 1542350 retry.go:31] will retry after 1.577559772s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:29.086307 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:29.586102 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:30.086078 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:30.269847 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:30.336710 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.336749 1542350 retry.go:31] will retry after 2.535864228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.341075 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:30.419871 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.419902 1542350 retry.go:31] will retry after 2.188608586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.586056 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:31.086792 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:31.241343 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:31.303140 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:31.303175 1542350 retry.go:31] will retry after 4.008884548s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:31.586821 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:32.086175 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:32.587018 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:32.608868 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:32.689818 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:32.689856 1542350 retry.go:31] will retry after 5.074576061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:32.873213 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:32.940949 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:32.940984 1542350 retry.go:31] will retry after 7.456449925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:33.086429 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:33.586022 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:34.086094 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:34.585998 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:35.086896 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:35.312254 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:35.377660 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:35.377698 1542350 retry.go:31] will retry after 9.192453055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:35.587034 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:36.086843 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:36.586051 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:37.086838 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:37.586771 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:37.765048 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:37.824278 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:37.824312 1542350 retry.go:31] will retry after 11.772995815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:38.086864 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:38.586073 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:39.086969 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:39.586055 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:40.086122 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:40.398539 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:40.468470 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:40.468513 1542350 retry.go:31] will retry after 13.248485713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:40.586656 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:41.086065 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:41.586366 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:42.086189 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:42.586086 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:43.086089 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:43.586027 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:44.086574 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:44.570741 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:44.586247 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:44.654442 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:44.654477 1542350 retry.go:31] will retry after 14.969470504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:45.086353 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:45.586835 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:46.086082 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:46.586716 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:47.086075 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:47.586621 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:48.086124 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:48.586928 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:49.087028 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:49.586115 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:49.597980 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:49.660643 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:49.660672 1542350 retry.go:31] will retry after 11.077380605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:50.086194 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:50.586148 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:51.086673 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:51.586443 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:52.086098 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:52.586095 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:53.086117 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:53.586714 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:53.717290 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:53.777883 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:53.777918 1542350 retry.go:31] will retry after 17.242726639s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:54.086154 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:54.586837 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:55.086738 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:55.586843 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:56.086112 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:56.586065 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:57.087033 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:57.587026 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:58.086821 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:58.586066 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:59.086344 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:59.586987 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:59.624396 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:59.692077 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:59.692113 1542350 retry.go:31] will retry after 25.118824905s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:00.086703 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:00.586076 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:00.738326 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:13:00.797829 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:00.797860 1542350 retry.go:31] will retry after 28.273971977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:01.086109 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:01.586093 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:02.086800 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:02.586059 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:03.086118 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:03.586099 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:04.086574 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:04.586119 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:05.087001 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:05.586735 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:06.087021 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:06.586098 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:07.086059 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:07.586074 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:08.086071 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:08.586627 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:09.086132 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:09.586339 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:10.086956 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:10.586102 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:11.020938 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:13:11.086782 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:13:11.098002 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:11.098037 1542350 retry.go:31] will retry after 28.022573365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:11.586801 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:12.086121 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:12.586779 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:13.086780 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:13.586110 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:14.086075 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:14.586725 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:15.086688 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:15.587040 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:16.086588 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:16.586972 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:17.086881 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:17.586014 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:18.086609 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:18.586065 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:19.086985 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:19.586109 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:20.086095 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:20.586709 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:21.086130 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:21.586680 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:21.586792 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:21.614864 1542350 cri.go:89] found id: ""
	I1213 16:13:21.614885 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.614894 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:21.614901 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:21.614963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:21.646495 1542350 cri.go:89] found id: ""
	I1213 16:13:21.646517 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.646525 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:21.646532 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:21.646592 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:21.676251 1542350 cri.go:89] found id: ""
	I1213 16:13:21.676274 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.676283 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:21.676289 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:21.676358 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:21.706048 1542350 cri.go:89] found id: ""
	I1213 16:13:21.706075 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.706084 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:21.706093 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:21.706167 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:21.733595 1542350 cri.go:89] found id: ""
	I1213 16:13:21.733620 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.733628 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:21.733634 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:21.733694 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:21.758418 1542350 cri.go:89] found id: ""
	I1213 16:13:21.758444 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.758453 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:21.758459 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:21.758520 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:21.782936 1542350 cri.go:89] found id: ""
	I1213 16:13:21.782962 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.782970 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:21.782976 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:21.783038 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:21.807262 1542350 cri.go:89] found id: ""
	I1213 16:13:21.807289 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.807298 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:21.807327 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:21.807340 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:21.862632 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:21.862670 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:21.879878 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:21.879905 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:21.954675 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:21.946473    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.946958    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.948653    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.949095    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.950567    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:21.946473    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.946958    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.948653    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.949095    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.950567    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:21.954699 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:21.954712 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:21.980443 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:21.980489 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:24.514188 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:24.524708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:24.524788 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:24.549819 1542350 cri.go:89] found id: ""
	I1213 16:13:24.549840 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.549848 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:24.549866 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:24.549925 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:24.574754 1542350 cri.go:89] found id: ""
	I1213 16:13:24.574781 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.574790 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:24.574795 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:24.574857 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:24.606443 1542350 cri.go:89] found id: ""
	I1213 16:13:24.606465 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.606474 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:24.606481 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:24.606542 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:24.638639 1542350 cri.go:89] found id: ""
	I1213 16:13:24.638660 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.638668 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:24.638674 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:24.638733 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:24.671023 1542350 cri.go:89] found id: ""
	I1213 16:13:24.671046 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.671055 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:24.671063 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:24.671137 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:24.697378 1542350 cri.go:89] found id: ""
	I1213 16:13:24.697405 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.697414 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:24.697420 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:24.697497 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:24.722594 1542350 cri.go:89] found id: ""
	I1213 16:13:24.722621 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.722631 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:24.722637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:24.722728 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:24.746821 1542350 cri.go:89] found id: ""
	I1213 16:13:24.746850 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.746860 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:24.746878 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:24.746891 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:24.763249 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:24.763286 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 16:13:24.811678 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:13:24.851435 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:24.840548    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.841317    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843094    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843728    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.845484    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:24.840548    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.841317    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843094    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843728    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.845484    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:24.851500 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:24.851539 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1213 16:13:24.879668 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:24.879746 1542350 retry.go:31] will retry after 33.423455906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:24.890839 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:24.890870 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:24.920848 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:24.920877 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:27.476632 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:27.488585 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:27.488659 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:27.518011 1542350 cri.go:89] found id: ""
	I1213 16:13:27.518034 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.518042 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:27.518049 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:27.518110 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:27.543732 1542350 cri.go:89] found id: ""
	I1213 16:13:27.543759 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.543771 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:27.543777 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:27.543862 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:27.568999 1542350 cri.go:89] found id: ""
	I1213 16:13:27.569025 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.569033 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:27.569039 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:27.569097 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:27.607884 1542350 cri.go:89] found id: ""
	I1213 16:13:27.607913 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.607921 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:27.607928 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:27.607987 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:27.644349 1542350 cri.go:89] found id: ""
	I1213 16:13:27.644376 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.644384 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:27.644390 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:27.644461 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:27.676832 1542350 cri.go:89] found id: ""
	I1213 16:13:27.676860 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.676870 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:27.676875 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:27.676934 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:27.702113 1542350 cri.go:89] found id: ""
	I1213 16:13:27.702142 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.702151 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:27.702157 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:27.702219 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:27.727737 1542350 cri.go:89] found id: ""
	I1213 16:13:27.727763 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.727772 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:27.727782 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:27.727795 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:27.782283 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:27.782317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:27.800167 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:27.800195 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:27.871267 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:27.862487    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.863167    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.864974    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.865561    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.867199    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:27.862487    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.863167    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.864974    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.865561    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.867199    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:27.871378 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:27.871398 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:27.896932 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:27.896972 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:29.072145 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:13:29.152200 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:29.152237 1542350 retry.go:31] will retry after 45.772066333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:30.424283 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:30.435064 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:30.435141 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:30.458954 1542350 cri.go:89] found id: ""
	I1213 16:13:30.458977 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.458985 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:30.458991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:30.459050 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:30.482988 1542350 cri.go:89] found id: ""
	I1213 16:13:30.483016 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.483025 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:30.483031 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:30.483089 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:30.508669 1542350 cri.go:89] found id: ""
	I1213 16:13:30.508695 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.508704 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:30.508710 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:30.508797 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:30.532450 1542350 cri.go:89] found id: ""
	I1213 16:13:30.532543 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.532561 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:30.532569 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:30.532643 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:30.561998 1542350 cri.go:89] found id: ""
	I1213 16:13:30.562026 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.562035 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:30.562041 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:30.562132 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:30.600654 1542350 cri.go:89] found id: ""
	I1213 16:13:30.600688 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.600703 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:30.600711 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:30.600824 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:30.628653 1542350 cri.go:89] found id: ""
	I1213 16:13:30.628724 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.628758 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:30.628798 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:30.628886 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:30.659930 1542350 cri.go:89] found id: ""
	I1213 16:13:30.660009 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.660032 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:30.660049 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:30.660076 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:30.717289 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:30.717327 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:30.733637 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:30.733668 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:30.804923 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:30.797049    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.797717    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799180    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799496    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.800891    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:30.797049    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.797717    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799180    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799496    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.800891    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:30.804949 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:30.804966 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:30.830439 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:30.830482 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:33.359431 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:33.370707 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:33.370778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:33.404091 1542350 cri.go:89] found id: ""
	I1213 16:13:33.404114 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.404135 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:33.404141 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:33.404200 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:33.432896 1542350 cri.go:89] found id: ""
	I1213 16:13:33.432922 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.432931 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:33.432937 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:33.433006 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:33.457244 1542350 cri.go:89] found id: ""
	I1213 16:13:33.457271 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.457280 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:33.457285 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:33.457343 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:33.482368 1542350 cri.go:89] found id: ""
	I1213 16:13:33.482389 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.482397 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:33.482403 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:33.482463 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:33.506253 1542350 cri.go:89] found id: ""
	I1213 16:13:33.506276 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.506284 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:33.506290 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:33.506350 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:33.532337 1542350 cri.go:89] found id: ""
	I1213 16:13:33.532362 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.532371 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:33.532377 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:33.532435 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:33.557859 1542350 cri.go:89] found id: ""
	I1213 16:13:33.557887 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.557896 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:33.557902 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:33.557961 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:33.585180 1542350 cri.go:89] found id: ""
	I1213 16:13:33.585208 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.585216 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:33.585226 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:33.585249 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:33.626301 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:33.626332 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:33.693048 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:33.693086 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:33.709482 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:33.709550 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:33.779437 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:33.771529    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.771977    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773496    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773820    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.775408    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:33.771529    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.771977    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773496    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773820    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.775408    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:33.779461 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:33.779476 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:36.314080 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:36.324714 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:36.324793 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:36.352949 1542350 cri.go:89] found id: ""
	I1213 16:13:36.353025 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.353048 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:36.353066 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:36.353159 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:36.384496 1542350 cri.go:89] found id: ""
	I1213 16:13:36.384563 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.384586 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:36.384603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:36.384690 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:36.418779 1542350 cri.go:89] found id: ""
	I1213 16:13:36.418842 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.418866 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:36.418884 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:36.418968 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:36.448378 1542350 cri.go:89] found id: ""
	I1213 16:13:36.448420 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.448429 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:36.448445 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:36.448524 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:36.473284 1542350 cri.go:89] found id: ""
	I1213 16:13:36.473361 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.473376 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:36.473383 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:36.473454 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:36.500619 1542350 cri.go:89] found id: ""
	I1213 16:13:36.500642 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.500651 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:36.500663 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:36.500724 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:36.529444 1542350 cri.go:89] found id: ""
	I1213 16:13:36.529517 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.529532 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:36.529539 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:36.529609 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:36.553861 1542350 cri.go:89] found id: ""
	I1213 16:13:36.553886 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.553894 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:36.553904 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:36.553915 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:36.610671 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:36.610704 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:36.628462 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:36.628544 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:36.705883 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:36.697914    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.698707    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700211    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700636    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.702077    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:36.697914    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.698707    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700211    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700636    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.702077    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:36.705906 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:36.705918 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:36.730607 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:36.730646 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:39.121733 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:13:39.184741 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:39.184777 1542350 retry.go:31] will retry after 19.299456104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:39.259892 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:39.271332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:39.271403 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:39.300612 1542350 cri.go:89] found id: ""
	I1213 16:13:39.300637 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.300646 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:39.300652 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:39.300712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:39.324641 1542350 cri.go:89] found id: ""
	I1213 16:13:39.324666 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.324675 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:39.324680 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:39.324739 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:39.356074 1542350 cri.go:89] found id: ""
	I1213 16:13:39.356099 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.356108 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:39.356114 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:39.356178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:39.383742 1542350 cri.go:89] found id: ""
	I1213 16:13:39.383766 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.383775 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:39.383781 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:39.383846 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:39.411271 1542350 cri.go:89] found id: ""
	I1213 16:13:39.411297 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.411305 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:39.411334 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:39.411395 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:39.437295 1542350 cri.go:89] found id: ""
	I1213 16:13:39.437321 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.437329 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:39.437336 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:39.437419 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:39.462328 1542350 cri.go:89] found id: ""
	I1213 16:13:39.462352 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.462361 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:39.462368 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:39.462445 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:39.486926 1542350 cri.go:89] found id: ""
	I1213 16:13:39.486951 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.486961 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:39.486970 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:39.486986 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:39.545864 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:39.545902 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:39.561750 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:39.561780 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:39.648853 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:39.635798    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.636607    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.637647    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.638390    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.642907    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:39.635798    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.636607    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.637647    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.638390    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.642907    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:39.648878 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:39.648893 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:39.674238 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:39.674280 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:42.203005 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:42.217190 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:42.217290 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:42.248179 1542350 cri.go:89] found id: ""
	I1213 16:13:42.248214 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.248224 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:42.248231 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:42.248315 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:42.281373 1542350 cri.go:89] found id: ""
	I1213 16:13:42.281400 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.281409 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:42.281416 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:42.281481 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:42.313298 1542350 cri.go:89] found id: ""
	I1213 16:13:42.313327 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.313343 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:42.313351 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:42.313419 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:42.347164 1542350 cri.go:89] found id: ""
	I1213 16:13:42.347256 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.347274 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:42.347282 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:42.347421 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:42.377063 1542350 cri.go:89] found id: ""
	I1213 16:13:42.377097 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.377105 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:42.377112 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:42.377195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:42.404395 1542350 cri.go:89] found id: ""
	I1213 16:13:42.404430 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.404439 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:42.404446 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:42.404522 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:42.429038 1542350 cri.go:89] found id: ""
	I1213 16:13:42.429112 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.429128 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:42.429135 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:42.429202 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:42.453891 1542350 cri.go:89] found id: ""
	I1213 16:13:42.453935 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.453944 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:42.453954 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:42.453970 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:42.509865 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:42.509901 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:42.525994 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:42.526022 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:42.601177 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:42.590564    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.592469    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.593053    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.594708    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.595182    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:42.590564    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.592469    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.593053    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.594708    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.595182    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:42.601257 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:42.601292 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:42.630417 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:42.630495 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:45.167780 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:45.186685 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:45.186786 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:45.266905 1542350 cri.go:89] found id: ""
	I1213 16:13:45.266931 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.266941 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:45.266948 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:45.267020 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:45.302244 1542350 cri.go:89] found id: ""
	I1213 16:13:45.302273 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.302283 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:45.302289 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:45.302368 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:45.330669 1542350 cri.go:89] found id: ""
	I1213 16:13:45.330697 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.330707 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:45.330713 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:45.330777 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:45.368642 1542350 cri.go:89] found id: ""
	I1213 16:13:45.368677 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.368685 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:45.368692 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:45.368753 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:45.407608 1542350 cri.go:89] found id: ""
	I1213 16:13:45.407631 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.407639 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:45.407645 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:45.407706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:45.438077 1542350 cri.go:89] found id: ""
	I1213 16:13:45.438104 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.438112 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:45.438119 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:45.438178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:45.467617 1542350 cri.go:89] found id: ""
	I1213 16:13:45.467645 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.467654 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:45.467660 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:45.467725 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:45.496715 1542350 cri.go:89] found id: ""
	I1213 16:13:45.496741 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.496750 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:45.496760 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:45.496771 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:45.522438 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:45.522475 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:45.554662 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:45.554691 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:45.614193 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:45.614275 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:45.631794 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:45.631875 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:45.701179 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:45.692225    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.692953    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.694656    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.695386    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.697149    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:45.692225    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.692953    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.694656    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.695386    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.697149    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
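Each wait-loop iteration above enumerates CRI containers per component with `sudo crictl ps -a --quiet --name=<component>`; an empty result (`found id: ""`) means that component has not been started yet. A self-contained sketch of that enumeration step, using only the flags already shown in the log and hypothetical function names (not minikube's cri.go):

// list_cri_containers.go - illustrative sketch of the enumeration step above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs crictl prints for containers whose name matches
// the given component (e.g. "kube-apiserver"); --quiet prints one container ID
// per line, so empty output means no matching container exists.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}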
	I1213 16:13:48.201848 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:48.212860 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:48.212934 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:48.241802 1542350 cri.go:89] found id: ""
	I1213 16:13:48.241830 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.241838 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:48.241845 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:48.241908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:48.270100 1542350 cri.go:89] found id: ""
	I1213 16:13:48.270128 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.270137 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:48.270143 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:48.270207 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:48.295048 1542350 cri.go:89] found id: ""
	I1213 16:13:48.295073 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.295081 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:48.295087 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:48.295150 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:48.320949 1542350 cri.go:89] found id: ""
	I1213 16:13:48.320974 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.320983 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:48.320989 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:48.321048 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:48.357548 1542350 cri.go:89] found id: ""
	I1213 16:13:48.357572 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.357580 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:48.357586 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:48.357646 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:48.395642 1542350 cri.go:89] found id: ""
	I1213 16:13:48.395676 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.395685 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:48.395692 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:48.395761 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:48.426584 1542350 cri.go:89] found id: ""
	I1213 16:13:48.426611 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.426620 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:48.426626 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:48.426687 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:48.451854 1542350 cri.go:89] found id: ""
	I1213 16:13:48.451890 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.451899 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:48.451923 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:48.451938 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:48.508044 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:48.508086 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:48.523941 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:48.523971 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:48.594870 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:48.579201    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.579927    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.583793    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.584114    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.585609    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:48.579201    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.579927    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.583793    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.584114    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.585609    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:48.594893 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:48.594906 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:48.621999 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:48.622078 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:51.156024 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:51.167178 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:51.167252 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:51.198661 1542350 cri.go:89] found id: ""
	I1213 16:13:51.198684 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.198692 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:51.198699 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:51.198757 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:51.224046 1542350 cri.go:89] found id: ""
	I1213 16:13:51.224069 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.224077 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:51.224083 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:51.224149 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:51.253035 1542350 cri.go:89] found id: ""
	I1213 16:13:51.253062 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.253070 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:51.253076 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:51.253164 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:51.278917 1542350 cri.go:89] found id: ""
	I1213 16:13:51.278943 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.278952 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:51.278958 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:51.279016 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:51.305382 1542350 cri.go:89] found id: ""
	I1213 16:13:51.305405 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.305413 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:51.305419 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:51.305480 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:51.329703 1542350 cri.go:89] found id: ""
	I1213 16:13:51.329726 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.329735 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:51.329741 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:51.329800 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:51.359740 1542350 cri.go:89] found id: ""
	I1213 16:13:51.359762 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.359770 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:51.359776 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:51.359840 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:51.386446 1542350 cri.go:89] found id: ""
	I1213 16:13:51.386522 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.386544 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:51.386566 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:51.386589 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:51.412669 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:51.412707 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:51.453745 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:51.453775 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:51.511660 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:51.511698 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:51.527994 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:51.528025 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:51.595021 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:51.583262    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.584038    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.585894    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.586556    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.588263    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:51.583262    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.584038    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.585894    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.586556    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.588263    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:54.096158 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:54.107425 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:54.107512 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:54.138865 1542350 cri.go:89] found id: ""
	I1213 16:13:54.138891 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.138899 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:54.138905 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:54.138966 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:54.164096 1542350 cri.go:89] found id: ""
	I1213 16:13:54.164121 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.164130 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:54.164135 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:54.164195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:54.193309 1542350 cri.go:89] found id: ""
	I1213 16:13:54.193335 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.193345 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:54.193352 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:54.193416 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:54.219468 1542350 cri.go:89] found id: ""
	I1213 16:13:54.219490 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.219499 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:54.219520 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:54.219589 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:54.244935 1542350 cri.go:89] found id: ""
	I1213 16:13:54.244962 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.244971 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:54.244977 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:54.245038 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:54.274445 1542350 cri.go:89] found id: ""
	I1213 16:13:54.274472 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.274481 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:54.274488 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:54.274554 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:54.304121 1542350 cri.go:89] found id: ""
	I1213 16:13:54.304146 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.304154 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:54.304160 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:54.304217 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:54.329301 1542350 cri.go:89] found id: ""
	I1213 16:13:54.329327 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.329335 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:54.329350 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:54.329362 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:54.357962 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:54.358003 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:54.393726 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:54.393753 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:54.454879 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:54.454917 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:54.471046 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:54.471122 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:54.539675 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:54.530749    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.531242    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.532726    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.533202    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.535706    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:54.530749    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.531242    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.532726    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.533202    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.535706    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:57.040543 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:57.051825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:57.051902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:57.080948 1542350 cri.go:89] found id: ""
	I1213 16:13:57.080975 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.080984 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:57.080990 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:57.081060 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:57.106564 1542350 cri.go:89] found id: ""
	I1213 16:13:57.106592 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.106602 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:57.106609 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:57.106674 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:57.132305 1542350 cri.go:89] found id: ""
	I1213 16:13:57.132332 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.132341 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:57.132347 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:57.132415 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:57.161893 1542350 cri.go:89] found id: ""
	I1213 16:13:57.161919 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.161928 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:57.161934 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:57.161996 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:57.187018 1542350 cri.go:89] found id: ""
	I1213 16:13:57.187042 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.187051 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:57.187057 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:57.187118 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:57.213450 1542350 cri.go:89] found id: ""
	I1213 16:13:57.213477 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.213486 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:57.213493 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:57.213598 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:57.239773 1542350 cri.go:89] found id: ""
	I1213 16:13:57.239799 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.239808 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:57.239814 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:57.239875 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:57.268874 1542350 cri.go:89] found id: ""
	I1213 16:13:57.268901 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.268910 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:57.268920 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:57.268932 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:57.325438 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:57.325478 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:57.345255 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:57.345288 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:57.419796 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:57.411216    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.412003    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.413617    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.414124    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.415814    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:57.411216    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.412003    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.413617    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.414124    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.415814    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:57.419818 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:57.419830 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:57.445711 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:57.445753 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:58.303454 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:13:58.370450 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:13:58.370563 1542350 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 16:13:58.485061 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:13:58.547882 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:13:58.547990 1542350 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 16:13:59.973778 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:59.984749 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:59.984822 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:00.047691 1542350 cri.go:89] found id: ""
	I1213 16:14:00.047719 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.047729 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:00.047735 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:00.047812 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:00.172004 1542350 cri.go:89] found id: ""
	I1213 16:14:00.172032 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.172042 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:00.172048 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:00.172124 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:00.225264 1542350 cri.go:89] found id: ""
	I1213 16:14:00.225417 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.225430 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:00.225441 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:00.225515 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:00.291798 1542350 cri.go:89] found id: ""
	I1213 16:14:00.291826 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.291837 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:00.291843 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:00.291915 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:00.322720 1542350 cri.go:89] found id: ""
	I1213 16:14:00.322775 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.322785 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:00.322802 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:00.322965 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:00.382229 1542350 cri.go:89] found id: ""
	I1213 16:14:00.382259 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.382268 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:00.382276 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:00.382353 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:00.428076 1542350 cri.go:89] found id: ""
	I1213 16:14:00.428104 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.428114 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:00.428122 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:00.428188 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:00.456283 1542350 cri.go:89] found id: ""
	I1213 16:14:00.456313 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.456322 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:00.456334 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:00.456347 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:00.487074 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:00.487103 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:00.543060 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:00.543096 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:00.559570 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:00.559599 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:00.643362 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:00.632447    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.633406    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637299    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637614    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.639111    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:00.632447    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.633406    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637299    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637614    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.639111    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:00.643385 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:00.643398 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:03.169712 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:03.180422 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:03.180498 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:03.204986 1542350 cri.go:89] found id: ""
	I1213 16:14:03.205052 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.205078 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:03.205091 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:03.205167 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:03.229548 1542350 cri.go:89] found id: ""
	I1213 16:14:03.229624 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.229648 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:03.229667 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:03.229759 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:03.255379 1542350 cri.go:89] found id: ""
	I1213 16:14:03.255401 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.255410 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:03.255415 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:03.255474 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:03.281492 1542350 cri.go:89] found id: ""
	I1213 16:14:03.281516 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.281526 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:03.281532 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:03.281594 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:03.309687 1542350 cri.go:89] found id: ""
	I1213 16:14:03.309709 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.309717 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:03.309723 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:03.309781 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:03.342064 1542350 cri.go:89] found id: ""
	I1213 16:14:03.342088 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.342097 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:03.342104 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:03.342166 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:03.374355 1542350 cri.go:89] found id: ""
	I1213 16:14:03.374427 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.374449 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:03.374468 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:03.374551 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:03.402300 1542350 cri.go:89] found id: ""
	I1213 16:14:03.402373 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.402397 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:03.402419 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:03.402454 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:03.419291 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:03.419341 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:03.488415 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:03.479265    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.480042    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.481961    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.482530    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.483778    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:03.479265    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.480042    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.481961    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.482530    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.483778    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:03.488438 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:03.488450 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:03.513548 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:03.513583 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:03.541410 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:03.541438 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:06.098537 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:06.109444 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:06.109517 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:06.135738 1542350 cri.go:89] found id: ""
	I1213 16:14:06.135763 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.135772 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:06.135778 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:06.135838 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:06.164881 1542350 cri.go:89] found id: ""
	I1213 16:14:06.164907 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.164915 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:06.164921 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:06.165006 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:06.190132 1542350 cri.go:89] found id: ""
	I1213 16:14:06.190157 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.190166 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:06.190172 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:06.190237 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:06.214554 1542350 cri.go:89] found id: ""
	I1213 16:14:06.214588 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.214603 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:06.214610 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:06.214678 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:06.239546 1542350 cri.go:89] found id: ""
	I1213 16:14:06.239573 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.239582 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:06.239588 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:06.239675 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:06.265195 1542350 cri.go:89] found id: ""
	I1213 16:14:06.265223 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.265231 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:06.265237 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:06.265308 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:06.289926 1542350 cri.go:89] found id: ""
	I1213 16:14:06.289960 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.289969 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:06.289991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:06.290071 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:06.314603 1542350 cri.go:89] found id: ""
	I1213 16:14:06.314629 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.314637 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:06.314647 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:06.314683 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:06.371177 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:06.371258 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:06.393856 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:06.393930 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:06.459001 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:06.450168    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.450823    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.452646    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.453140    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.454779    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:06.450168    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.450823    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.452646    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.453140    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.454779    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:06.459025 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:06.459038 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:06.484151 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:06.484188 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:09.017168 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:09.028196 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:09.028273 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:09.056958 1542350 cri.go:89] found id: ""
	I1213 16:14:09.056983 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.056991 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:09.056997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:09.057056 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:09.081528 1542350 cri.go:89] found id: ""
	I1213 16:14:09.081554 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.081562 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:09.081568 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:09.081625 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:09.106979 1542350 cri.go:89] found id: ""
	I1213 16:14:09.107006 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.107015 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:09.107022 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:09.107082 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:09.131992 1542350 cri.go:89] found id: ""
	I1213 16:14:09.132014 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.132022 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:09.132031 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:09.132090 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:09.159379 1542350 cri.go:89] found id: ""
	I1213 16:14:09.159403 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.159411 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:09.159417 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:09.159475 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:09.188125 1542350 cri.go:89] found id: ""
	I1213 16:14:09.188148 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.188157 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:09.188163 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:09.188223 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:09.213724 1542350 cri.go:89] found id: ""
	I1213 16:14:09.213746 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.213755 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:09.213762 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:09.213820 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:09.239228 1542350 cri.go:89] found id: ""
	I1213 16:14:09.239250 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.239258 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:09.239269 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:09.239280 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:09.264873 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:09.264908 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:09.297705 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:09.297733 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:09.356080 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:09.356130 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:09.376099 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:09.376130 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:09.447156 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:09.438767    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.439139    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.440687    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.441254    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.442932    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:09.438767    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.439139    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.440687    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.441254    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.442932    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:11.948214 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:11.961565 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:11.961686 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:11.989927 1542350 cri.go:89] found id: ""
	I1213 16:14:11.989978 1542350 logs.go:282] 0 containers: []
	W1213 16:14:11.989988 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:11.989997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:11.990074 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:12.015827 1542350 cri.go:89] found id: ""
	I1213 16:14:12.015853 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.015863 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:12.015869 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:12.015931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:12.043024 1542350 cri.go:89] found id: ""
	I1213 16:14:12.043052 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.043061 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:12.043067 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:12.043129 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:12.068348 1542350 cri.go:89] found id: ""
	I1213 16:14:12.068376 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.068385 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:12.068390 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:12.068450 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:12.097740 1542350 cri.go:89] found id: ""
	I1213 16:14:12.097774 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.097783 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:12.097790 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:12.097858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:12.121723 1542350 cri.go:89] found id: ""
	I1213 16:14:12.121755 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.121764 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:12.121770 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:12.121842 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:12.150786 1542350 cri.go:89] found id: ""
	I1213 16:14:12.150813 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.150821 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:12.150827 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:12.150892 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:12.175342 1542350 cri.go:89] found id: ""
	I1213 16:14:12.175367 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.175376 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:12.175386 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:12.175404 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:12.231019 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:12.231066 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:12.247225 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:12.247257 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:12.311535 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:12.303778    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.304209    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.305700    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.306034    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.307505    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:12.303778    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.304209    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.305700    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.306034    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.307505    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:12.311562 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:12.311575 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:12.336385 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:12.336419 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:14.871456 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:14.883637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:14.883706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:14.912506 1542350 cri.go:89] found id: ""
	I1213 16:14:14.912530 1542350 logs.go:282] 0 containers: []
	W1213 16:14:14.912539 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:14.912545 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:14.912612 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:14.924965 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:14:14.948875 1542350 cri.go:89] found id: ""
	I1213 16:14:14.948908 1542350 logs.go:282] 0 containers: []
	W1213 16:14:14.948917 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:14.948923 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:14.948983 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	W1213 16:14:15.004427 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:14:15.004545 1542350 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 16:14:15.004879 1542350 cri.go:89] found id: ""
	I1213 16:14:15.004917 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.005050 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:15.005059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:15.005129 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:15.016719 1542350 out.go:179] * Enabled addons: 
	I1213 16:14:15.019727 1542350 addons.go:530] duration metric: took 1m53.607875831s for enable addons: enabled=[]
	I1213 16:14:15.061323 1542350 cri.go:89] found id: ""
	I1213 16:14:15.061351 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.061359 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:15.061366 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:15.061431 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:15.089262 1542350 cri.go:89] found id: ""
	I1213 16:14:15.089290 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.089310 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:15.089351 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:15.089416 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:15.114964 1542350 cri.go:89] found id: ""
	I1213 16:14:15.114992 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.115001 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:15.115010 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:15.115087 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:15.150205 1542350 cri.go:89] found id: ""
	I1213 16:14:15.150228 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.150237 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:15.150243 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:15.150305 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:15.179096 1542350 cri.go:89] found id: ""
	I1213 16:14:15.179124 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.179159 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:15.179170 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:15.179186 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:15.240671 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:15.240716 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:15.257989 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:15.258020 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:15.327105 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:15.316562    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.317280    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321065    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321732    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.323049    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:15.316562    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.317280    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321065    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321732    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.323049    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:15.327125 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:15.327139 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:15.356556 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:15.356601 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:17.895435 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:17.906103 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:17.906178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:17.934229 1542350 cri.go:89] found id: ""
	I1213 16:14:17.934255 1542350 logs.go:282] 0 containers: []
	W1213 16:14:17.934263 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:17.934270 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:17.934329 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:17.960923 1542350 cri.go:89] found id: ""
	I1213 16:14:17.960947 1542350 logs.go:282] 0 containers: []
	W1213 16:14:17.960955 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:17.960980 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:17.961039 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:17.986062 1542350 cri.go:89] found id: ""
	I1213 16:14:17.986096 1542350 logs.go:282] 0 containers: []
	W1213 16:14:17.986105 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:17.986111 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:17.986180 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:18.019636 1542350 cri.go:89] found id: ""
	I1213 16:14:18.019718 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.019741 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:18.019761 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:18.019858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:18.046719 1542350 cri.go:89] found id: ""
	I1213 16:14:18.046787 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.046810 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:18.046829 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:18.046924 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:18.073562 1542350 cri.go:89] found id: ""
	I1213 16:14:18.073641 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.073665 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:18.073685 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:18.073763 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:18.100968 1542350 cri.go:89] found id: ""
	I1213 16:14:18.101005 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.101014 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:18.101021 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:18.101086 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:18.127366 1542350 cri.go:89] found id: ""
	I1213 16:14:18.127391 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.127401 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:18.127410 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:18.127422 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:18.160263 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:18.160289 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:18.217033 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:18.217066 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:18.234115 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:18.234146 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:18.301091 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:18.292902    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.293484    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295231    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295655    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.297297    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:18.292902    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.293484    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295231    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295655    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.297297    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:18.301112 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:18.301126 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:20.828738 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:20.843249 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:20.843356 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:20.878301 1542350 cri.go:89] found id: ""
	I1213 16:14:20.878326 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.878335 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:20.878341 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:20.878400 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:20.911841 1542350 cri.go:89] found id: ""
	I1213 16:14:20.911863 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.911872 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:20.911877 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:20.911937 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:20.938802 1542350 cri.go:89] found id: ""
	I1213 16:14:20.938825 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.938833 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:20.938839 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:20.938895 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:20.963358 1542350 cri.go:89] found id: ""
	I1213 16:14:20.963382 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.963395 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:20.963402 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:20.963462 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:20.988428 1542350 cri.go:89] found id: ""
	I1213 16:14:20.988500 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.988516 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:20.988523 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:20.988586 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:21.015053 1542350 cri.go:89] found id: ""
	I1213 16:14:21.015088 1542350 logs.go:282] 0 containers: []
	W1213 16:14:21.015097 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:21.015104 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:21.015168 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:21.041720 1542350 cri.go:89] found id: ""
	I1213 16:14:21.041747 1542350 logs.go:282] 0 containers: []
	W1213 16:14:21.041761 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:21.041767 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:21.041844 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:21.066333 1542350 cri.go:89] found id: ""
	I1213 16:14:21.066358 1542350 logs.go:282] 0 containers: []
	W1213 16:14:21.066367 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:21.066376 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:21.066390 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:21.092074 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:21.092113 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:21.119921 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:21.119949 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:21.175737 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:21.175772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:21.192772 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:21.192802 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:21.258320 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:21.248712    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.249237    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.250941    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.251593    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.253749    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:21.248712    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.249237    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.250941    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.251593    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.253749    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:23.760202 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:23.770818 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:23.770889 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:23.797015 1542350 cri.go:89] found id: ""
	I1213 16:14:23.797038 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.797047 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:23.797053 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:23.797113 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:23.822062 1542350 cri.go:89] found id: ""
	I1213 16:14:23.822085 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.822093 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:23.822100 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:23.822158 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:23.874192 1542350 cri.go:89] found id: ""
	I1213 16:14:23.874214 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.874223 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:23.874229 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:23.874286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:23.900200 1542350 cri.go:89] found id: ""
	I1213 16:14:23.900221 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.900230 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:23.900236 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:23.900296 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:23.926269 1542350 cri.go:89] found id: ""
	I1213 16:14:23.926298 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.926306 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:23.926313 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:23.926373 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:23.953863 1542350 cri.go:89] found id: ""
	I1213 16:14:23.953893 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.953902 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:23.953909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:23.953978 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:23.978285 1542350 cri.go:89] found id: ""
	I1213 16:14:23.978314 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.978323 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:23.978332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:23.978392 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:24.004367 1542350 cri.go:89] found id: ""
	I1213 16:14:24.004397 1542350 logs.go:282] 0 containers: []
	W1213 16:14:24.004407 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:24.004418 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:24.004433 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:24.038684 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:24.038715 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:24.093699 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:24.093736 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:24.109888 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:24.109958 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:24.176373 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:24.167217    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.168183    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.169904    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.170492    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.172210    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:24.167217    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.168183    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.169904    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.170492    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.172210    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:24.176410 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:24.176423 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:26.703702 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:26.715414 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:26.715505 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:26.741617 1542350 cri.go:89] found id: ""
	I1213 16:14:26.741644 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.741653 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:26.741660 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:26.741725 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:26.773142 1542350 cri.go:89] found id: ""
	I1213 16:14:26.773166 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.773175 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:26.773180 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:26.773248 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:26.800698 1542350 cri.go:89] found id: ""
	I1213 16:14:26.800770 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.800792 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:26.800812 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:26.800916 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:26.826188 1542350 cri.go:89] found id: ""
	I1213 16:14:26.826213 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.826222 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:26.826228 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:26.826290 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:26.858537 1542350 cri.go:89] found id: ""
	I1213 16:14:26.858564 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.858573 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:26.858579 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:26.858644 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:26.893373 1542350 cri.go:89] found id: ""
	I1213 16:14:26.893401 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.893411 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:26.893417 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:26.893491 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:26.924977 1542350 cri.go:89] found id: ""
	I1213 16:14:26.925004 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.925013 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:26.925019 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:26.925080 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:26.949933 1542350 cri.go:89] found id: ""
	I1213 16:14:26.949962 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.949971 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:26.949980 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:26.949997 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:26.980349 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:26.980380 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:27.038924 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:27.038960 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:27.055463 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:27.055494 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:27.125589 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:27.116928    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.117440    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119173    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119547    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.121058    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:27.116928    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.117440    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119173    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119547    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.121058    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:27.125608 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:27.125624 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:29.652560 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:29.663991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:29.664080 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:29.692800 1542350 cri.go:89] found id: ""
	I1213 16:14:29.692825 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.692834 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:29.692841 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:29.692908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:29.724553 1542350 cri.go:89] found id: ""
	I1213 16:14:29.724585 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.724595 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:29.724603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:29.724665 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:29.750391 1542350 cri.go:89] found id: ""
	I1213 16:14:29.750460 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.750484 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:29.750502 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:29.750593 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:29.774900 1542350 cri.go:89] found id: ""
	I1213 16:14:29.774968 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.774994 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:29.775012 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:29.775104 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:29.800460 1542350 cri.go:89] found id: ""
	I1213 16:14:29.800503 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.800512 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:29.800518 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:29.800581 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:29.825184 1542350 cri.go:89] found id: ""
	I1213 16:14:29.825261 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.825285 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:29.825305 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:29.825391 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:29.857574 1542350 cri.go:89] found id: ""
	I1213 16:14:29.857604 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.857613 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:29.857619 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:29.857681 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:29.886573 1542350 cri.go:89] found id: ""
	I1213 16:14:29.886602 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.886610 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:29.886620 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:29.886636 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:29.954547 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:29.945729    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.946349    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948212    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948764    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.950488    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:29.945729    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.946349    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948212    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948764    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.950488    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:29.954614 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:29.954636 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:29.980281 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:29.980318 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:30.020553 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:30.020640 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:30.112248 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:30.112288 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:32.632543 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:32.644615 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:32.644739 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:32.671076 1542350 cri.go:89] found id: ""
	I1213 16:14:32.671103 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.671115 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:32.671124 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:32.671204 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:32.705219 1542350 cri.go:89] found id: ""
	I1213 16:14:32.705245 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.705255 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:32.705264 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:32.705345 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:32.734663 1542350 cri.go:89] found id: ""
	I1213 16:14:32.734764 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.734796 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:32.734826 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:32.734911 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:32.763416 1542350 cri.go:89] found id: ""
	I1213 16:14:32.763441 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.763451 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:32.763457 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:32.763519 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:32.790404 1542350 cri.go:89] found id: ""
	I1213 16:14:32.790478 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.790500 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:32.790519 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:32.790638 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:32.818613 1542350 cri.go:89] found id: ""
	I1213 16:14:32.818699 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.818735 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:32.818773 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:32.818908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:32.850999 1542350 cri.go:89] found id: ""
	I1213 16:14:32.851029 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.851038 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:32.851050 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:32.851113 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:32.883800 1542350 cri.go:89] found id: ""
	I1213 16:14:32.883828 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.883837 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:32.883846 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:32.883857 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:32.950061 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:32.950111 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:32.967586 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:32.967617 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:33.038320 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:33.029200    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.030090    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.031863    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.032322    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.033913    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:33.029200    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.030090    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.031863    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.032322    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.033913    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:33.038342 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:33.038357 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:33.066098 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:33.066154 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:35.607481 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:35.619526 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:35.619589 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:35.646097 1542350 cri.go:89] found id: ""
	I1213 16:14:35.646120 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.646131 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:35.646137 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:35.646197 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:35.671288 1542350 cri.go:89] found id: ""
	I1213 16:14:35.671349 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.671358 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:35.671364 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:35.671428 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:35.696891 1542350 cri.go:89] found id: ""
	I1213 16:14:35.696915 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.696923 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:35.696930 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:35.696990 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:35.722027 1542350 cri.go:89] found id: ""
	I1213 16:14:35.722049 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.722057 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:35.722063 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:35.722120 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:35.746428 1542350 cri.go:89] found id: ""
	I1213 16:14:35.746450 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.746458 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:35.746465 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:35.746521 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:35.771433 1542350 cri.go:89] found id: ""
	I1213 16:14:35.771456 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.771465 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:35.771471 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:35.771527 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:35.795226 1542350 cri.go:89] found id: ""
	I1213 16:14:35.795292 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.795408 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:35.795422 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:35.795494 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:35.819205 1542350 cri.go:89] found id: ""
	I1213 16:14:35.819237 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.819246 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:35.819256 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:35.819268 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:35.856667 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:35.856698 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:35.921282 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:35.921317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:35.937351 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:35.937379 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:36.013024 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:35.998154    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:35.998523    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000027    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000325    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.001562    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:35.998154    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:35.998523    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000027    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000325    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.001562    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:36.013050 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:36.013065 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:38.540010 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:38.553894 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:38.553969 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:38.587080 1542350 cri.go:89] found id: ""
	I1213 16:14:38.587102 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.587110 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:38.587116 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:38.587180 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:38.615796 1542350 cri.go:89] found id: ""
	I1213 16:14:38.615820 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.615829 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:38.615835 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:38.615895 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:38.652609 1542350 cri.go:89] found id: ""
	I1213 16:14:38.652634 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.652643 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:38.652649 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:38.652706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:38.681712 1542350 cri.go:89] found id: ""
	I1213 16:14:38.681738 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.681747 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:38.681753 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:38.681812 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:38.707047 1542350 cri.go:89] found id: ""
	I1213 16:14:38.707076 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.707085 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:38.707091 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:38.707154 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:38.731834 1542350 cri.go:89] found id: ""
	I1213 16:14:38.731868 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.731878 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:38.731884 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:38.731951 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:38.755752 1542350 cri.go:89] found id: ""
	I1213 16:14:38.755816 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.755838 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:38.755855 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:38.755940 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:38.780290 1542350 cri.go:89] found id: ""
	I1213 16:14:38.780316 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.780325 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:38.780335 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:38.780354 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:38.837581 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:38.837613 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:38.855100 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:38.855130 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:38.927088 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:38.917976    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.918723    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.920509    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.921087    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.922821    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:38.917976    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.918723    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.920509    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.921087    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.922821    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:38.927155 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:38.927178 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:38.952089 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:38.952127 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:41.483644 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:41.494493 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:41.494574 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:41.518966 1542350 cri.go:89] found id: ""
	I1213 16:14:41.518988 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.518996 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:41.519002 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:41.519066 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:41.545695 1542350 cri.go:89] found id: ""
	I1213 16:14:41.545720 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.545729 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:41.545734 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:41.545798 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:41.571565 1542350 cri.go:89] found id: ""
	I1213 16:14:41.571591 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.571600 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:41.571606 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:41.571673 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:41.619450 1542350 cri.go:89] found id: ""
	I1213 16:14:41.619473 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.619482 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:41.619488 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:41.619548 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:41.653736 1542350 cri.go:89] found id: ""
	I1213 16:14:41.653757 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.653766 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:41.653773 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:41.653835 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:41.682235 1542350 cri.go:89] found id: ""
	I1213 16:14:41.682257 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.682265 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:41.682272 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:41.682332 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:41.708453 1542350 cri.go:89] found id: ""
	I1213 16:14:41.708475 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.708489 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:41.708496 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:41.708554 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:41.737148 1542350 cri.go:89] found id: ""
	I1213 16:14:41.737171 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.737179 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:41.737193 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:41.737205 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:41.792082 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:41.792120 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:41.808566 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:41.808597 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:41.888202 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:41.877628    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.878620    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.880407    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.881012    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.883935    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:41.877628    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.878620    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.880407    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.881012    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.883935    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:41.888226 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:41.888238 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:41.913429 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:41.913466 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:44.445881 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:44.456550 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:44.456627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:44.482008 1542350 cri.go:89] found id: ""
	I1213 16:14:44.482031 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.482039 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:44.482045 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:44.482103 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:44.507630 1542350 cri.go:89] found id: ""
	I1213 16:14:44.507654 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.507662 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:44.507668 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:44.507729 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:44.536680 1542350 cri.go:89] found id: ""
	I1213 16:14:44.536704 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.536713 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:44.536719 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:44.536778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:44.565166 1542350 cri.go:89] found id: ""
	I1213 16:14:44.565189 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.565199 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:44.565205 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:44.565265 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:44.598174 1542350 cri.go:89] found id: ""
	I1213 16:14:44.598197 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.598206 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:44.598214 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:44.598280 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:44.640061 1542350 cri.go:89] found id: ""
	I1213 16:14:44.640084 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.640092 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:44.640099 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:44.640159 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:44.671940 1542350 cri.go:89] found id: ""
	I1213 16:14:44.671968 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.671976 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:44.671982 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:44.672044 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:44.698885 1542350 cri.go:89] found id: ""
	I1213 16:14:44.698907 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.698916 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:44.698925 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:44.698939 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:44.715019 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:44.715090 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:44.777959 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:44.769893    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.770508    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772034    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772476    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.773993    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:44.769893    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.770508    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772034    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772476    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.773993    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:44.777983 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:44.777996 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:44.803994 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:44.804031 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:44.835446 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:44.835476 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
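	Every crictl query in these passes returns an empty ID list, so none of the control-plane containers (kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kube-proxy, coredns) exist in the k8s.io namespace, not even in an exited state. Either the kubelet never created the static-pod sandboxes or the runtime failed to start them. A sketch of the follow-up checks inside the node, assuming crictl is on PATH (the log's own fallback `which crictl || echo crictl` implies it normally is):
	
		sudo crictl pods        # are any pod sandboxes present at all?
		sudo crictl ps -a       # any containers, including exited ones?
		sudo journalctl -u kubelet -n 400 | grep -iE 'static pod|apiserver|failed'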
	I1213 16:14:47.402282 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:47.413184 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:47.413252 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:47.439678 1542350 cri.go:89] found id: ""
	I1213 16:14:47.439702 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.439710 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:47.439717 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:47.439777 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:47.469694 1542350 cri.go:89] found id: ""
	I1213 16:14:47.469720 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.469728 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:47.469734 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:47.469797 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:47.495280 1542350 cri.go:89] found id: ""
	I1213 16:14:47.495306 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.495339 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:47.495346 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:47.495408 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:47.525092 1542350 cri.go:89] found id: ""
	I1213 16:14:47.525118 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.525127 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:47.525133 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:47.525194 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:47.551755 1542350 cri.go:89] found id: ""
	I1213 16:14:47.551782 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.551790 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:47.551797 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:47.551858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:47.577368 1542350 cri.go:89] found id: ""
	I1213 16:14:47.577393 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.577402 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:47.577408 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:47.577479 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:47.603993 1542350 cri.go:89] found id: ""
	I1213 16:14:47.604016 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.604024 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:47.604030 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:47.604095 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:47.634166 1542350 cri.go:89] found id: ""
	I1213 16:14:47.634188 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.634197 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:47.634206 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:47.634217 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:47.698875 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:47.698911 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:47.715548 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:47.715580 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:47.783485 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:47.774772    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.775432    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777207    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777881    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.779547    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:47.774772    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.775432    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777207    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777881    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.779547    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:47.783508 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:47.783521 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:47.809639 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:47.809672 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:50.342353 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:50.355175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:50.355303 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:50.381034 1542350 cri.go:89] found id: ""
	I1213 16:14:50.381066 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.381076 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:50.381084 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:50.381166 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:50.409181 1542350 cri.go:89] found id: ""
	I1213 16:14:50.409208 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.409217 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:50.409222 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:50.409286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:50.438419 1542350 cri.go:89] found id: ""
	I1213 16:14:50.438451 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.438460 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:50.438466 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:50.438525 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:50.468687 1542350 cri.go:89] found id: ""
	I1213 16:14:50.468713 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.468721 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:50.468728 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:50.468789 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:50.498096 1542350 cri.go:89] found id: ""
	I1213 16:14:50.498163 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.498187 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:50.498205 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:50.498292 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:50.523754 1542350 cri.go:89] found id: ""
	I1213 16:14:50.523820 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.523835 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:50.523843 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:50.523902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:50.555302 1542350 cri.go:89] found id: ""
	I1213 16:14:50.555387 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.555403 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:50.555410 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:50.555477 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:50.581005 1542350 cri.go:89] found id: ""
	I1213 16:14:50.581035 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.581044 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:50.581054 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:50.581067 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:50.611931 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:50.612005 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:50.650728 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:50.650754 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:50.709840 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:50.709878 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:50.729613 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:50.729711 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:50.796424 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:50.788003    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.788585    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790191    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790714    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.792323    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:50.788003    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.788585    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790191    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790714    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.792323    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:53.298328 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:53.309106 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:53.309178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:53.333481 1542350 cri.go:89] found id: ""
	I1213 16:14:53.333513 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.333523 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:53.333529 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:53.333590 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:53.358898 1542350 cri.go:89] found id: ""
	I1213 16:14:53.358923 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.358932 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:53.358938 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:53.358999 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:53.384286 1542350 cri.go:89] found id: ""
	I1213 16:14:53.384311 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.384322 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:53.384329 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:53.384388 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:53.408999 1542350 cri.go:89] found id: ""
	I1213 16:14:53.409022 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.409031 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:53.409037 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:53.409102 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:53.437666 1542350 cri.go:89] found id: ""
	I1213 16:14:53.437688 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.437696 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:53.437703 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:53.437764 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:53.462775 1542350 cri.go:89] found id: ""
	I1213 16:14:53.462868 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.462885 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:53.462893 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:53.462955 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:53.489379 1542350 cri.go:89] found id: ""
	I1213 16:14:53.489403 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.489413 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:53.489419 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:53.489479 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:53.513660 1542350 cri.go:89] found id: ""
	I1213 16:14:53.513683 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.513691 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:53.513701 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:53.513711 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:53.544644 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:53.544670 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:53.603653 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:53.603733 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:53.620761 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:53.620846 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:53.694809 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:53.685787    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.686311    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688155    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688956    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.690579    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:53.685787    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.686311    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688155    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688956    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.690579    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:53.694871 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:53.694886 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
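	When crictl shows nothing, containerd's own journal (gathered on each pass above with `journalctl -u containerd -n 400`) is the next place to look for sandbox or image-pull failures. A hedged sketch, assuming the stock ctr client ships in the node image alongside containerd:
	
		sudo journalctl -u containerd -n 400 | grep -iE 'error|fail'
		sudo ctr --namespace k8s.io containers list   # containerd-level view; should mirror the empty crictl output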
	I1213 16:14:56.222442 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:56.233418 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:56.233521 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:56.262552 1542350 cri.go:89] found id: ""
	I1213 16:14:56.262578 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.262587 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:56.262594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:56.262677 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:56.290583 1542350 cri.go:89] found id: ""
	I1213 16:14:56.290611 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.290620 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:56.290627 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:56.290778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:56.316264 1542350 cri.go:89] found id: ""
	I1213 16:14:56.316292 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.316300 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:56.316306 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:56.316366 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:56.341047 1542350 cri.go:89] found id: ""
	I1213 16:14:56.341072 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.341080 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:56.341086 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:56.341163 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:56.369874 1542350 cri.go:89] found id: ""
	I1213 16:14:56.369909 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.369918 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:56.369924 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:56.369993 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:56.396373 1542350 cri.go:89] found id: ""
	I1213 16:14:56.396400 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.396408 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:56.396415 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:56.396480 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:56.421264 1542350 cri.go:89] found id: ""
	I1213 16:14:56.421286 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.421294 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:56.421300 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:56.421362 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:56.449683 1542350 cri.go:89] found id: ""
	I1213 16:14:56.449708 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.449717 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:56.449727 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:56.449740 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:56.513612 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:56.505286    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.506108    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.507846    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.508199    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.509740    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:56.505286    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.506108    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.507846    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.508199    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.509740    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:56.513635 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:56.513648 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:56.539159 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:56.539193 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:56.569885 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:56.569913 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:56.636667 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:56.636712 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:59.161215 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:59.172070 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:59.172139 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:59.196977 1542350 cri.go:89] found id: ""
	I1213 16:14:59.197003 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.197013 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:59.197019 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:59.197124 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:59.222813 1542350 cri.go:89] found id: ""
	I1213 16:14:59.222839 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.222849 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:59.222855 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:59.222921 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:59.249285 1542350 cri.go:89] found id: ""
	I1213 16:14:59.249309 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.249317 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:59.249323 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:59.249385 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:59.275052 1542350 cri.go:89] found id: ""
	I1213 16:14:59.275076 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.275085 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:59.275091 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:59.275152 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:59.301297 1542350 cri.go:89] found id: ""
	I1213 16:14:59.301323 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.301331 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:59.301337 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:59.301395 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:59.326556 1542350 cri.go:89] found id: ""
	I1213 16:14:59.326582 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.326591 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:59.326599 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:59.326658 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:59.360044 1542350 cri.go:89] found id: ""
	I1213 16:14:59.360070 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.360079 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:59.360085 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:59.360145 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:59.385355 1542350 cri.go:89] found id: ""
	I1213 16:14:59.385380 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.385389 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:59.385398 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:59.385410 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:59.441005 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:59.441040 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:59.456936 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:59.456968 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:59.523389 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:59.514843    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.515544    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517163    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517739    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.519447    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:59.514843    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.515544    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517163    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517739    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.519447    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:59.523410 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:59.523423 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:59.548680 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:59.548717 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:02.077266 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:02.091997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:02.092082 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:02.125051 1542350 cri.go:89] found id: ""
	I1213 16:15:02.125079 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.125088 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:02.125095 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:02.125158 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:02.155518 1542350 cri.go:89] found id: ""
	I1213 16:15:02.155547 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.155555 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:02.155567 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:02.155626 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:02.180408 1542350 cri.go:89] found id: ""
	I1213 16:15:02.180435 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.180444 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:02.180450 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:02.180541 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:02.206923 1542350 cri.go:89] found id: ""
	I1213 16:15:02.206957 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.206966 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:02.206979 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:02.207049 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:02.234308 1542350 cri.go:89] found id: ""
	I1213 16:15:02.234332 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.234341 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:02.234347 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:02.234412 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:02.260647 1542350 cri.go:89] found id: ""
	I1213 16:15:02.260671 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.260680 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:02.260686 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:02.260746 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:02.287051 1542350 cri.go:89] found id: ""
	I1213 16:15:02.287075 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.287083 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:02.287089 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:02.287151 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:02.313703 1542350 cri.go:89] found id: ""
	I1213 16:15:02.313726 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.313734 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:02.313744 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:02.313755 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:02.369628 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:02.369663 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:02.385814 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:02.385896 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:02.450440 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:02.441569    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.442433    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.443449    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445136    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445445    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:02.441569    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.442433    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.443449    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445136    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445445    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:02.450460 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:02.450475 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:02.475994 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:02.476032 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:05.008952 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:05.023767 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:05.023852 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:05.048943 1542350 cri.go:89] found id: ""
	I1213 16:15:05.048970 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.048979 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:05.048985 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:05.049046 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:05.073030 1542350 cri.go:89] found id: ""
	I1213 16:15:05.073057 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.073066 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:05.073072 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:05.073141 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:05.113695 1542350 cri.go:89] found id: ""
	I1213 16:15:05.113724 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.113733 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:05.113739 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:05.113798 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:05.143435 1542350 cri.go:89] found id: ""
	I1213 16:15:05.143462 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.143471 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:05.143476 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:05.143533 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:05.169643 1542350 cri.go:89] found id: ""
	I1213 16:15:05.169672 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.169682 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:05.169694 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:05.169756 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:05.194836 1542350 cri.go:89] found id: ""
	I1213 16:15:05.194865 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.194874 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:05.194881 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:05.194939 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:05.223183 1542350 cri.go:89] found id: ""
	I1213 16:15:05.223208 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.223216 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:05.223223 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:05.223284 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:05.247344 1542350 cri.go:89] found id: ""
	I1213 16:15:05.247368 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.247377 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:05.247386 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:05.247400 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:05.302110 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:05.302144 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:05.318507 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:05.318537 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:05.383855 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:05.375536    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.376623    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.377525    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.378529    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.380053    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:05.375536    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.376623    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.377525    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.378529    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.380053    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:05.383878 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:05.383891 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:05.408947 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:05.408984 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:07.939749 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:07.950076 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:07.950150 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:07.975327 1542350 cri.go:89] found id: ""
	I1213 16:15:07.975351 1542350 logs.go:282] 0 containers: []
	W1213 16:15:07.975360 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:07.975366 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:07.975423 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:07.999830 1542350 cri.go:89] found id: ""
	I1213 16:15:07.999856 1542350 logs.go:282] 0 containers: []
	W1213 16:15:07.999864 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:07.999870 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:07.999928 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:08.026521 1542350 cri.go:89] found id: ""
	I1213 16:15:08.026547 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.026556 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:08.026562 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:08.026627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:08.053320 1542350 cri.go:89] found id: ""
	I1213 16:15:08.053343 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.053352 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:08.053358 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:08.053418 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:08.084631 1542350 cri.go:89] found id: ""
	I1213 16:15:08.084654 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.084663 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:08.084669 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:08.084727 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:08.115761 1542350 cri.go:89] found id: ""
	I1213 16:15:08.115842 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.115866 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:08.115884 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:08.115992 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:08.143108 1542350 cri.go:89] found id: ""
	I1213 16:15:08.143131 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.143141 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:08.143150 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:08.143210 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:08.169485 1542350 cri.go:89] found id: ""
	I1213 16:15:08.169548 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.169571 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:08.169593 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:08.169632 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:08.186535 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:08.186608 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:08.254187 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:08.245289    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.245832    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.247630    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.248117    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.249849    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:08.245289    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.245832    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.247630    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.248117    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.249849    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:08.254252 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:08.254277 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:08.279498 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:08.279538 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:08.307012 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:08.307040 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:10.863431 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:10.875836 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:10.875902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:10.902828 1542350 cri.go:89] found id: ""
	I1213 16:15:10.902850 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.902859 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:10.902864 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:10.902924 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:10.927709 1542350 cri.go:89] found id: ""
	I1213 16:15:10.927732 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.927741 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:10.927747 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:10.927807 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:10.952424 1542350 cri.go:89] found id: ""
	I1213 16:15:10.952448 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.952457 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:10.952466 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:10.952544 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:10.977056 1542350 cri.go:89] found id: ""
	I1213 16:15:10.977087 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.977095 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:10.977101 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:10.977163 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:11.006742 1542350 cri.go:89] found id: ""
	I1213 16:15:11.006767 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.006776 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:11.006782 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:11.006857 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:11.033448 1542350 cri.go:89] found id: ""
	I1213 16:15:11.033471 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.033481 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:11.033491 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:11.033549 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:11.058288 1542350 cri.go:89] found id: ""
	I1213 16:15:11.058319 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.058329 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:11.058335 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:11.058403 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:11.086206 1542350 cri.go:89] found id: ""
	I1213 16:15:11.086229 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.086238 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:11.086248 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:11.086260 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:11.149204 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:11.149250 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:11.169208 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:11.169240 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:11.239824 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:11.230926    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.231789    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233414    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233896    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.235615    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:11.230926    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.231789    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233414    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233896    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.235615    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:11.239888 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:11.239913 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:11.265156 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:11.265190 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:13.793650 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:13.804879 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:13.804957 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:13.830496 1542350 cri.go:89] found id: ""
	I1213 16:15:13.830524 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.830534 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:13.830541 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:13.830598 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:13.860289 1542350 cri.go:89] found id: ""
	I1213 16:15:13.860316 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.860325 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:13.860331 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:13.860404 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:13.889862 1542350 cri.go:89] found id: ""
	I1213 16:15:13.889900 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.889909 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:13.889915 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:13.889982 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:13.917096 1542350 cri.go:89] found id: ""
	I1213 16:15:13.917119 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.917127 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:13.917134 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:13.917192 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:13.941374 1542350 cri.go:89] found id: ""
	I1213 16:15:13.941397 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.941406 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:13.941412 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:13.941472 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:13.966429 1542350 cri.go:89] found id: ""
	I1213 16:15:13.966457 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.966467 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:13.966474 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:13.966536 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:13.992124 1542350 cri.go:89] found id: ""
	I1213 16:15:13.992193 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.992217 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:13.992231 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:13.992304 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:14.018581 1542350 cri.go:89] found id: ""
	I1213 16:15:14.018613 1542350 logs.go:282] 0 containers: []
	W1213 16:15:14.018621 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:14.018631 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:14.018643 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:14.076560 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:14.076594 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:14.093391 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:14.093470 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:14.169809 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:14.161020    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.162081    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.163786    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.164233    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.165949    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:14.161020    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.162081    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.163786    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.164233    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.165949    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:14.169831 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:14.169844 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:14.196553 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:14.196588 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:16.730383 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:16.741020 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:16.741091 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:16.765402 1542350 cri.go:89] found id: ""
	I1213 16:15:16.765425 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.765434 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:16.765440 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:16.765498 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:16.791004 1542350 cri.go:89] found id: ""
	I1213 16:15:16.791033 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.791042 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:16.791048 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:16.791112 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:16.816897 1542350 cri.go:89] found id: ""
	I1213 16:15:16.816925 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.816933 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:16.816939 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:16.817002 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:16.861774 1542350 cri.go:89] found id: ""
	I1213 16:15:16.861796 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.861803 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:16.861809 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:16.861868 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:16.895555 1542350 cri.go:89] found id: ""
	I1213 16:15:16.895575 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.895584 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:16.895589 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:16.895650 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:16.923607 1542350 cri.go:89] found id: ""
	I1213 16:15:16.923630 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.923638 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:16.923644 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:16.923705 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:16.952569 1542350 cri.go:89] found id: ""
	I1213 16:15:16.952602 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.952612 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:16.952618 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:16.952681 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:16.982597 1542350 cri.go:89] found id: ""
	I1213 16:15:16.982625 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.982634 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:16.982644 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:16.982657 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:17.040379 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:17.040417 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:17.056673 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:17.056703 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:17.155960 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:17.135813    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.147685    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.148473    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150347    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150872    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:17.135813    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.147685    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.148473    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150347    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150872    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:17.155984 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:17.155997 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:17.181703 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:17.181742 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:19.710412 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:19.723576 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:19.723654 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:19.752079 1542350 cri.go:89] found id: ""
	I1213 16:15:19.752102 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.752111 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:19.752117 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:19.752198 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:19.776763 1542350 cri.go:89] found id: ""
	I1213 16:15:19.776829 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.776845 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:19.776853 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:19.776912 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:19.803069 1542350 cri.go:89] found id: ""
	I1213 16:15:19.803133 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.803149 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:19.803157 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:19.803216 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:19.828299 1542350 cri.go:89] found id: ""
	I1213 16:15:19.828332 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.828342 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:19.828348 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:19.828419 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:19.858915 1542350 cri.go:89] found id: ""
	I1213 16:15:19.858992 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.859013 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:19.859032 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:19.859127 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:19.889950 1542350 cri.go:89] found id: ""
	I1213 16:15:19.889987 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.889996 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:19.890003 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:19.890076 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:19.915855 1542350 cri.go:89] found id: ""
	I1213 16:15:19.915879 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.915893 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:19.915899 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:19.915958 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:19.945371 1542350 cri.go:89] found id: ""
	I1213 16:15:19.945409 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.945418 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:19.945460 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:19.945484 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:20.004545 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:20.004586 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:20.030075 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:20.030110 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:20.119134 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:20.105779    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.107335    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.108625    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.110278    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.111655    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:20.105779    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.107335    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.108625    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.110278    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.111655    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:20.119228 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:20.119426 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:20.157972 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:20.158017 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:22.690836 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:22.701577 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:22.701651 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:22.725883 1542350 cri.go:89] found id: ""
	I1213 16:15:22.725908 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.725917 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:22.725922 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:22.725980 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:22.750347 1542350 cri.go:89] found id: ""
	I1213 16:15:22.750373 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.750382 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:22.750388 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:22.750446 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:22.773604 1542350 cri.go:89] found id: ""
	I1213 16:15:22.773627 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.773636 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:22.773642 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:22.773699 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:22.798122 1542350 cri.go:89] found id: ""
	I1213 16:15:22.798144 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.798153 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:22.798159 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:22.798216 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:22.825364 1542350 cri.go:89] found id: ""
	I1213 16:15:22.825386 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.825394 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:22.825400 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:22.825463 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:22.860458 1542350 cri.go:89] found id: ""
	I1213 16:15:22.860480 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.860489 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:22.860503 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:22.860560 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:22.888782 1542350 cri.go:89] found id: ""
	I1213 16:15:22.888865 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.888889 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:22.888907 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:22.888991 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:22.917264 1542350 cri.go:89] found id: ""
	I1213 16:15:22.917288 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.917297 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:22.917306 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:22.917318 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:22.947808 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:22.947850 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:23.002868 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:23.002910 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:23.019957 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:23.019988 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:23.095906 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:23.076845    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.077548    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079269    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079937    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.083952    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:23.076845    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.077548    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079269    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079937    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.083952    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:23.095985 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:23.096017 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:25.625418 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:25.636179 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:25.636256 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:25.660796 1542350 cri.go:89] found id: ""
	I1213 16:15:25.660819 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.660827 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:25.660833 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:25.660890 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:25.692137 1542350 cri.go:89] found id: ""
	I1213 16:15:25.692161 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.692169 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:25.692175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:25.692234 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:25.722645 1542350 cri.go:89] found id: ""
	I1213 16:15:25.722667 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.722677 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:25.722683 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:25.722741 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:25.746597 1542350 cri.go:89] found id: ""
	I1213 16:15:25.746619 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.746627 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:25.746633 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:25.746690 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:25.773364 1542350 cri.go:89] found id: ""
	I1213 16:15:25.773391 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.773399 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:25.773405 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:25.773464 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:25.798024 1542350 cri.go:89] found id: ""
	I1213 16:15:25.798047 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.798056 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:25.798062 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:25.798140 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:25.824949 1542350 cri.go:89] found id: ""
	I1213 16:15:25.824975 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.824984 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:25.824989 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:25.825065 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:25.851736 1542350 cri.go:89] found id: ""
	I1213 16:15:25.851809 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.851843 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:25.851869 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:25.851910 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:25.868875 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:25.868902 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:25.941457 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:25.933124    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.933785    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935483    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935919    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.937566    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:25.933124    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.933785    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935483    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935919    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.937566    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:25.941527 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:25.941548 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:25.966625 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:25.966656 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:25.996976 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:25.997004 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:28.556122 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:28.567257 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:28.567352 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:28.592087 1542350 cri.go:89] found id: ""
	I1213 16:15:28.592153 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.592179 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:28.592196 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:28.592293 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:28.616658 1542350 cri.go:89] found id: ""
	I1213 16:15:28.616731 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.616746 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:28.616753 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:28.616822 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:28.640310 1542350 cri.go:89] found id: ""
	I1213 16:15:28.640335 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.640344 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:28.640349 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:28.640412 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:28.665406 1542350 cri.go:89] found id: ""
	I1213 16:15:28.665433 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.665443 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:28.665449 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:28.665508 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:28.690028 1542350 cri.go:89] found id: ""
	I1213 16:15:28.690090 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.690121 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:28.690143 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:28.690247 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:28.714656 1542350 cri.go:89] found id: ""
	I1213 16:15:28.714719 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.714753 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:28.714775 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:28.714862 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:28.741721 1542350 cri.go:89] found id: ""
	I1213 16:15:28.741745 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.741753 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:28.741759 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:28.741860 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:28.770039 1542350 cri.go:89] found id: ""
	I1213 16:15:28.770106 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.770132 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:28.770153 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:28.770191 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:28.794482 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:28.794514 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:28.825722 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:28.825751 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:28.885792 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:28.885826 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:28.902629 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:28.902658 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:28.968699 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:28.960883    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.961407    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.962907    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.963230    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.964863    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:28.960883    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.961407    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.962907    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.963230    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.964863    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:31.469803 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:31.480479 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:31.480600 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:31.512783 1542350 cri.go:89] found id: ""
	I1213 16:15:31.512807 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.512816 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:31.512823 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:31.512881 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:31.539773 1542350 cri.go:89] found id: ""
	I1213 16:15:31.539800 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.539815 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:31.539836 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:31.539915 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:31.564690 1542350 cri.go:89] found id: ""
	I1213 16:15:31.564715 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.564723 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:31.564729 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:31.564791 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:31.589449 1542350 cri.go:89] found id: ""
	I1213 16:15:31.589476 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.589484 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:31.589490 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:31.589550 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:31.614171 1542350 cri.go:89] found id: ""
	I1213 16:15:31.614203 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.614212 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:31.614218 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:31.614278 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:31.641466 1542350 cri.go:89] found id: ""
	I1213 16:15:31.641489 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.641498 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:31.641505 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:31.641563 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:31.665618 1542350 cri.go:89] found id: ""
	I1213 16:15:31.665641 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.665649 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:31.665656 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:31.665715 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:31.694436 1542350 cri.go:89] found id: ""
	I1213 16:15:31.694531 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.694554 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:31.694589 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:31.694621 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:31.720014 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:31.720047 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:31.746773 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:31.746844 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:31.802034 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:31.802070 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:31.819067 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:31.819096 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:31.926406 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:31.917414    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.918134    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.919118    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.920759    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.921377    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:31.917414    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.918134    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.919118    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.920759    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.921377    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:34.427501 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:34.438467 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:34.438539 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:34.469663 1542350 cri.go:89] found id: ""
	I1213 16:15:34.469685 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.469693 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:34.469699 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:34.469763 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:34.497352 1542350 cri.go:89] found id: ""
	I1213 16:15:34.497375 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.497384 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:34.497391 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:34.497449 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:34.522437 1542350 cri.go:89] found id: ""
	I1213 16:15:34.522462 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.522471 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:34.522477 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:34.522533 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:34.546310 1542350 cri.go:89] found id: ""
	I1213 16:15:34.546335 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.546344 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:34.546350 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:34.546410 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:34.570057 1542350 cri.go:89] found id: ""
	I1213 16:15:34.570082 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.570091 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:34.570097 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:34.570154 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:34.597335 1542350 cri.go:89] found id: ""
	I1213 16:15:34.597360 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.597369 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:34.597375 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:34.597438 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:34.622402 1542350 cri.go:89] found id: ""
	I1213 16:15:34.622426 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.622435 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:34.622441 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:34.622501 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:34.647379 1542350 cri.go:89] found id: ""
	I1213 16:15:34.647405 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.647414 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:34.647423 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:34.647435 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:34.707433 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:34.699526    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.700203    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701513    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701944    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.703518    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:34.699526    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.700203    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701513    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701944    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.703518    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:34.707452 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:34.707464 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:34.732617 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:34.732650 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:34.760551 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:34.760579 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:34.817043 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:34.817078 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:37.335446 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:37.346358 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:37.346480 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:37.375693 1542350 cri.go:89] found id: ""
	I1213 16:15:37.375763 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.375784 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:37.375803 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:37.375896 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:37.401729 1542350 cri.go:89] found id: ""
	I1213 16:15:37.401753 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.401761 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:37.401768 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:37.401832 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:37.426557 1542350 cri.go:89] found id: ""
	I1213 16:15:37.426583 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.426591 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:37.426597 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:37.426659 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:37.452633 1542350 cri.go:89] found id: ""
	I1213 16:15:37.452658 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.452666 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:37.452672 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:37.452731 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:37.476262 1542350 cri.go:89] found id: ""
	I1213 16:15:37.476287 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.476296 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:37.476302 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:37.476388 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:37.501165 1542350 cri.go:89] found id: ""
	I1213 16:15:37.501190 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.501198 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:37.501204 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:37.501285 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:37.524960 1542350 cri.go:89] found id: ""
	I1213 16:15:37.524983 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.524991 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:37.524997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:37.525055 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:37.550053 1542350 cri.go:89] found id: ""
	I1213 16:15:37.550079 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.550088 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:37.550097 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:37.550109 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:37.613799 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:37.604980    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.605842    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.607596    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.608285    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.609930    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:37.604980    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.605842    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.607596    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.608285    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.609930    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:37.613824 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:37.613837 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:37.638525 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:37.638559 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:37.665937 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:37.665965 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:37.722593 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:37.722628 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:40.238420 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:40.249230 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:40.249314 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:40.273014 1542350 cri.go:89] found id: ""
	I1213 16:15:40.273089 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.273133 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:40.273147 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:40.273227 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:40.298488 1542350 cri.go:89] found id: ""
	I1213 16:15:40.298553 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.298577 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:40.298595 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:40.298679 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:40.323131 1542350 cri.go:89] found id: ""
	I1213 16:15:40.323204 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.323228 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:40.323246 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:40.323368 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:40.360968 1542350 cri.go:89] found id: ""
	I1213 16:15:40.360996 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.361005 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:40.361011 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:40.361081 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:40.392530 1542350 cri.go:89] found id: ""
	I1213 16:15:40.392564 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.392573 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:40.392580 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:40.392648 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:40.428563 1542350 cri.go:89] found id: ""
	I1213 16:15:40.428588 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.428597 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:40.428603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:40.428686 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:40.453234 1542350 cri.go:89] found id: ""
	I1213 16:15:40.453259 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.453267 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:40.453274 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:40.453373 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:40.477074 1542350 cri.go:89] found id: ""
	I1213 16:15:40.477099 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.477108 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:40.477117 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:40.477144 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:40.503301 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:40.503521 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:40.537464 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:40.537493 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:40.593489 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:40.593526 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:40.609479 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:40.609507 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:40.674540 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:40.665852    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.666546    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.668621    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.669211    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.670710    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:40.665852    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.666546    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.668621    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.669211    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.670710    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
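
The cycle above is minikube's control-plane diagnostic loop: it probes each expected component container with crictl, and when nothing is found it falls back to journalctl and a kubectl "describe nodes" that fails with connection refused on localhost:8443. A minimal manual sketch of the same checks, assuming shell access to the node; the crictl, journalctl, and kubectl invocations are copied from the log, while the curl probe is an added assumption (minikube does not run it here):

    # check whether any apiserver container exists at all (empty output = none)
    sudo crictl ps -a --quiet --name=kube-apiserver

    # hypothetical reachability probe of the endpoint the kubeconfig points at;
    # "connection refused" here matches the errors captured above
    curl -k https://localhost:8443/livez || echo "apiserver not reachable"

    # the same node description minikube attempts with its bundled kubectl
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
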
	I1213 16:15:43.175524 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:43.186492 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:43.186570 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:43.210685 1542350 cri.go:89] found id: ""
	I1213 16:15:43.210712 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.210721 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:43.210728 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:43.210787 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:43.237076 1542350 cri.go:89] found id: ""
	I1213 16:15:43.237103 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.237112 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:43.237118 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:43.237177 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:43.264682 1542350 cri.go:89] found id: ""
	I1213 16:15:43.264756 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.264771 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:43.264778 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:43.264842 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:43.290869 1542350 cri.go:89] found id: ""
	I1213 16:15:43.290896 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.290905 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:43.290912 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:43.290976 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:43.316279 1542350 cri.go:89] found id: ""
	I1213 16:15:43.316306 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.316315 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:43.316322 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:43.316383 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:43.354838 1542350 cri.go:89] found id: ""
	I1213 16:15:43.354864 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.354873 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:43.354880 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:43.354957 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:43.391172 1542350 cri.go:89] found id: ""
	I1213 16:15:43.391198 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.391207 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:43.391213 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:43.391274 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:43.418613 1542350 cri.go:89] found id: ""
	I1213 16:15:43.418647 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.418657 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:43.418667 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:43.418680 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:43.435343 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:43.435384 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:43.503984 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:43.495327    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.495856    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.497696    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.498105    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.499619    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:43.495327    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.495856    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.497696    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.498105    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.499619    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:43.504005 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:43.504018 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:43.530844 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:43.530882 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:43.563046 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:43.563079 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:46.121764 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:46.133205 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:46.133278 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:46.159902 1542350 cri.go:89] found id: ""
	I1213 16:15:46.159926 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.159935 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:46.159941 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:46.160016 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:46.189203 1542350 cri.go:89] found id: ""
	I1213 16:15:46.189236 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.189260 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:46.189267 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:46.189336 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:46.214186 1542350 cri.go:89] found id: ""
	I1213 16:15:46.214208 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.214216 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:46.214222 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:46.214281 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:46.244894 1542350 cri.go:89] found id: ""
	I1213 16:15:46.244923 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.244943 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:46.244949 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:46.245015 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:46.270668 1542350 cri.go:89] found id: ""
	I1213 16:15:46.270693 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.270702 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:46.270708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:46.270771 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:46.296520 1542350 cri.go:89] found id: ""
	I1213 16:15:46.296565 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.296595 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:46.296603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:46.296684 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:46.322387 1542350 cri.go:89] found id: ""
	I1213 16:15:46.322410 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.322418 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:46.322424 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:46.322492 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:46.359071 1542350 cri.go:89] found id: ""
	I1213 16:15:46.359093 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.359102 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:46.359111 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:46.359121 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:46.397696 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:46.397772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:46.453341 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:46.453386 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:46.469917 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:46.469945 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:46.531639 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:46.523791    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.524434    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526137    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526426    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.527840    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:46.523791    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.524434    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526137    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526426    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.527840    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:46.531665 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:46.531678 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:49.058136 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:49.069039 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:49.069109 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:49.103600 1542350 cri.go:89] found id: ""
	I1213 16:15:49.103622 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.103630 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:49.103637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:49.103694 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:49.133756 1542350 cri.go:89] found id: ""
	I1213 16:15:49.133778 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.133787 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:49.133793 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:49.133850 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:49.159824 1542350 cri.go:89] found id: ""
	I1213 16:15:49.159847 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.159856 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:49.159862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:49.159919 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:49.188461 1542350 cri.go:89] found id: ""
	I1213 16:15:49.188527 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.188567 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:49.188598 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:49.188677 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:49.212316 1542350 cri.go:89] found id: ""
	I1213 16:15:49.212338 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.212346 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:49.212352 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:49.212424 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:49.236324 1542350 cri.go:89] found id: ""
	I1213 16:15:49.236348 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.236356 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:49.236362 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:49.236423 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:49.262438 1542350 cri.go:89] found id: ""
	I1213 16:15:49.262475 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.262484 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:49.262491 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:49.262578 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:49.292613 1542350 cri.go:89] found id: ""
	I1213 16:15:49.292637 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.292646 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:49.292655 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:49.292667 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:49.350224 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:49.350260 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:49.367633 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:49.367661 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:49.436081 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:49.427382    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.428117    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.429891    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.430411    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.431982    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:49.427382    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.428117    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.429891    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.430411    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.431982    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:49.436102 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:49.436115 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:49.461438 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:49.461474 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
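
Every pass ends the same way: zero control-plane containers and a refused connection to localhost:8443, which points at the kubelet never starting the static control-plane pods rather than at a networking problem. The journal commands below are the exact ones the log shows minikube running; the final grep filter is a hypothetical narrowing step, not something minikube executes:

    # evidence gathering as run by minikube
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400

    # hypothetical filter: look for static-pod / manifest errors in the kubelet journal
    sudo journalctl -u kubelet -n 400 | grep -iE 'static pod|manifest|apiserver'
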
	I1213 16:15:51.994161 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:52.005864 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:52.005962 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:52.032002 1542350 cri.go:89] found id: ""
	I1213 16:15:52.032027 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.032052 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:52.032059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:52.032118 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:52.058529 1542350 cri.go:89] found id: ""
	I1213 16:15:52.058552 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.058561 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:52.058567 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:52.058627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:52.085765 1542350 cri.go:89] found id: ""
	I1213 16:15:52.085787 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.085795 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:52.085802 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:52.085860 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:52.113317 1542350 cri.go:89] found id: ""
	I1213 16:15:52.113389 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.113411 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:52.113430 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:52.113512 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:52.144343 1542350 cri.go:89] found id: ""
	I1213 16:15:52.144364 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.144373 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:52.144379 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:52.144450 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:52.170804 1542350 cri.go:89] found id: ""
	I1213 16:15:52.170876 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.170899 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:52.170916 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:52.171015 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:52.200043 1542350 cri.go:89] found id: ""
	I1213 16:15:52.200114 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.200137 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:52.200155 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:52.200254 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:52.226948 1542350 cri.go:89] found id: ""
	I1213 16:15:52.227022 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.227057 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:52.227086 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:52.227120 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:52.282092 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:52.282131 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:52.298201 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:52.298227 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:52.381110 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:52.372691    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.373232    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.374717    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.375078    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.376590    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:52.372691    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.373232    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.374717    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.375078    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.376590    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:52.381134 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:52.381148 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:52.409962 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:52.409994 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:54.942176 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:54.952757 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:54.952836 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:54.977644 1542350 cri.go:89] found id: ""
	I1213 16:15:54.977669 1542350 logs.go:282] 0 containers: []
	W1213 16:15:54.977678 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:54.977684 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:54.977742 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:55.005694 1542350 cri.go:89] found id: ""
	I1213 16:15:55.005722 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.005732 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:55.005740 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:55.005814 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:55.038377 1542350 cri.go:89] found id: ""
	I1213 16:15:55.038411 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.038422 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:55.038428 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:55.038493 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:55.065383 1542350 cri.go:89] found id: ""
	I1213 16:15:55.065417 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.065426 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:55.065433 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:55.065493 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:55.099813 1542350 cri.go:89] found id: ""
	I1213 16:15:55.099841 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.099850 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:55.099856 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:55.099931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:55.128346 1542350 cri.go:89] found id: ""
	I1213 16:15:55.128368 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.128380 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:55.128387 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:55.128456 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:55.160925 1542350 cri.go:89] found id: ""
	I1213 16:15:55.160957 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.160966 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:55.160973 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:55.161037 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:55.188105 1542350 cri.go:89] found id: ""
	I1213 16:15:55.188132 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.188141 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:55.188151 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:55.188164 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:55.218869 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:55.218893 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:55.274258 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:55.274294 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:55.290251 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:55.290280 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:55.359521 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:55.350886    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.351709    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353380    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353940    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.355564    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:55.350886    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.351709    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353380    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353940    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.355564    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:55.359543 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:55.359556 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:57.887804 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:57.898226 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:57.898297 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:57.922697 1542350 cri.go:89] found id: ""
	I1213 16:15:57.922723 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.922732 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:57.922740 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:57.922821 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:57.947431 1542350 cri.go:89] found id: ""
	I1213 16:15:57.947457 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.947467 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:57.947473 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:57.947532 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:57.971494 1542350 cri.go:89] found id: ""
	I1213 16:15:57.971557 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.971582 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:57.971601 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:57.971679 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:57.999470 1542350 cri.go:89] found id: ""
	I1213 16:15:57.999495 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.999504 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:57.999510 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:57.999572 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:58.028740 1542350 cri.go:89] found id: ""
	I1213 16:15:58.028767 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.028777 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:58.028783 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:58.028849 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:58.054022 1542350 cri.go:89] found id: ""
	I1213 16:15:58.054043 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.054053 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:58.054059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:58.054121 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:58.096720 1542350 cri.go:89] found id: ""
	I1213 16:15:58.096749 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.096758 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:58.096765 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:58.096825 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:58.133084 1542350 cri.go:89] found id: ""
	I1213 16:15:58.133114 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.133123 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:58.133133 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:58.133144 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:58.198401 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:58.198437 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:58.216601 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:58.216683 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:58.288456 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:58.279720    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.280528    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282071    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282726    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.283743    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:58.279720    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.280528    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282071    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282726    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.283743    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:58.288523 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:58.288544 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:58.314432 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:58.314470 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:00.851874 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:00.862470 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:00.862540 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:00.886360 1542350 cri.go:89] found id: ""
	I1213 16:16:00.886384 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.886392 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:00.886398 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:00.886458 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:00.910826 1542350 cri.go:89] found id: ""
	I1213 16:16:00.910851 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.910861 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:00.910867 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:00.910925 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:00.935111 1542350 cri.go:89] found id: ""
	I1213 16:16:00.935141 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.935150 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:00.935156 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:00.935214 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:00.960959 1542350 cri.go:89] found id: ""
	I1213 16:16:00.960982 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.960991 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:00.960997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:00.961057 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:00.985954 1542350 cri.go:89] found id: ""
	I1213 16:16:00.985977 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.985986 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:00.985991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:00.986052 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:01.011865 1542350 cri.go:89] found id: ""
	I1213 16:16:01.011889 1542350 logs.go:282] 0 containers: []
	W1213 16:16:01.011897 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:01.011903 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:01.011966 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:01.041391 1542350 cri.go:89] found id: ""
	I1213 16:16:01.041412 1542350 logs.go:282] 0 containers: []
	W1213 16:16:01.041421 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:01.041427 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:01.041486 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:01.065980 1542350 cri.go:89] found id: ""
	I1213 16:16:01.066001 1542350 logs.go:282] 0 containers: []
	W1213 16:16:01.066010 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:01.066020 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:01.066035 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:01.125520 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:01.125602 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:01.143155 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:01.143228 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:01.224569 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:01.213708    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.214378    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.216451    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.217392    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.219480    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:01.213708    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.214378    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.216451    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.217392    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.219480    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:01.224588 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:01.224602 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:01.251006 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:01.251045 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:03.780250 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:03.794327 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:03.794399 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:03.819181 1542350 cri.go:89] found id: ""
	I1213 16:16:03.819209 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.819218 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:03.819224 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:03.819285 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:03.845225 1542350 cri.go:89] found id: ""
	I1213 16:16:03.845248 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.845257 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:03.845264 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:03.845324 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:03.873944 1542350 cri.go:89] found id: ""
	I1213 16:16:03.873966 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.873975 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:03.873981 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:03.874042 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:03.899655 1542350 cri.go:89] found id: ""
	I1213 16:16:03.899685 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.899694 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:03.899701 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:03.899763 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:03.927094 1542350 cri.go:89] found id: ""
	I1213 16:16:03.927122 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.927131 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:03.927137 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:03.927196 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:03.952240 1542350 cri.go:89] found id: ""
	I1213 16:16:03.952267 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.952276 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:03.952282 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:03.952340 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:03.976494 1542350 cri.go:89] found id: ""
	I1213 16:16:03.976520 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.976529 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:03.976535 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:03.976605 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:04.001277 1542350 cri.go:89] found id: ""
	I1213 16:16:04.001304 1542350 logs.go:282] 0 containers: []
	W1213 16:16:04.001313 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:04.001324 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:04.001339 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:04.061393 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:04.061428 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:04.078258 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:04.078290 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:04.162687 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:04.153708    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.154478    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156333    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156798    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.158424    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:04.153708    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.154478    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156333    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156798    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.158424    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:04.162710 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:04.162723 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:04.187844 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:04.187879 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:06.716865 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:06.727125 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:06.727193 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:06.752991 1542350 cri.go:89] found id: ""
	I1213 16:16:06.753015 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.753024 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:06.753030 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:06.753089 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:06.777092 1542350 cri.go:89] found id: ""
	I1213 16:16:06.777116 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.777125 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:06.777130 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:06.777188 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:06.805182 1542350 cri.go:89] found id: ""
	I1213 16:16:06.805256 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.805278 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:06.805292 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:06.805363 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:06.833454 1542350 cri.go:89] found id: ""
	I1213 16:16:06.833477 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.833486 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:06.833492 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:06.833553 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:06.864279 1542350 cri.go:89] found id: ""
	I1213 16:16:06.864303 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.864311 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:06.864318 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:06.864379 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:06.889879 1542350 cri.go:89] found id: ""
	I1213 16:16:06.889905 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.889914 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:06.889920 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:06.889980 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:06.913566 1542350 cri.go:89] found id: ""
	I1213 16:16:06.913600 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.913609 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:06.913615 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:06.913682 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:06.939090 1542350 cri.go:89] found id: ""
	I1213 16:16:06.939161 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.939199 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:06.939226 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:06.939253 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:06.994546 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:06.994587 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:07.012062 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:07.012099 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:07.079574 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:07.070333    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.070833    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.072777    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.073334    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.074988    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:07.070333    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.070833    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.072777    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.073334    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.074988    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:07.079597 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:07.079609 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:07.106688 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:07.106772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:09.648446 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:09.659497 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:09.659572 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:09.685004 1542350 cri.go:89] found id: ""
	I1213 16:16:09.685031 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.685040 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:09.685047 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:09.685106 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:09.710322 1542350 cri.go:89] found id: ""
	I1213 16:16:09.710350 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.710359 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:09.710365 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:09.710424 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:09.736183 1542350 cri.go:89] found id: ""
	I1213 16:16:09.736209 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.736218 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:09.736225 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:09.736328 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:09.761808 1542350 cri.go:89] found id: ""
	I1213 16:16:09.761831 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.761839 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:09.761846 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:09.761907 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:09.788666 1542350 cri.go:89] found id: ""
	I1213 16:16:09.788690 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.788699 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:09.788705 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:09.788767 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:09.815565 1542350 cri.go:89] found id: ""
	I1213 16:16:09.815590 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.815598 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:09.815604 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:09.815663 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:09.841443 1542350 cri.go:89] found id: ""
	I1213 16:16:09.841466 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.841475 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:09.841481 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:09.841538 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:09.870775 1542350 cri.go:89] found id: ""
	I1213 16:16:09.870798 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.870806 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:09.870818 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:09.870829 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:09.927243 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:09.927279 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:09.944116 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:09.944150 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:10.018299 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:10.008262    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.009034    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011237    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011729    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.013703    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:10.008262    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.009034    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011237    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011729    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.013703    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:10.018334 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:10.018348 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:10.062337 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:10.062384 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:12.610748 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:12.622191 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:12.622266 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:12.654912 1542350 cri.go:89] found id: ""
	I1213 16:16:12.654939 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.654948 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:12.654955 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:12.655017 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:12.679878 1542350 cri.go:89] found id: ""
	I1213 16:16:12.679904 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.679913 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:12.679919 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:12.679981 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:12.708594 1542350 cri.go:89] found id: ""
	I1213 16:16:12.708619 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.708628 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:12.708641 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:12.708703 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:12.734832 1542350 cri.go:89] found id: ""
	I1213 16:16:12.734857 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.734866 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:12.734872 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:12.734931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:12.760756 1542350 cri.go:89] found id: ""
	I1213 16:16:12.760784 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.760793 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:12.760799 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:12.760860 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:12.786434 1542350 cri.go:89] found id: ""
	I1213 16:16:12.786470 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.786479 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:12.786486 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:12.786558 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:12.810666 1542350 cri.go:89] found id: ""
	I1213 16:16:12.810699 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.810708 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:12.810714 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:12.810779 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:12.835161 1542350 cri.go:89] found id: ""
	I1213 16:16:12.835206 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.835216 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:12.835225 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:12.835238 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:12.851412 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:12.851438 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:12.919002 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:12.910126    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.910878    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.912652    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.913309    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.915168    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:12.910126    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.910878    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.912652    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.913309    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.915168    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:12.919032 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:12.919045 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:12.945016 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:12.945054 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:12.975303 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:12.975353 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:15.533437 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:15.545434 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:15.545514 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:15.570277 1542350 cri.go:89] found id: ""
	I1213 16:16:15.570303 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.570353 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:15.570362 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:15.570427 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:15.602983 1542350 cri.go:89] found id: ""
	I1213 16:16:15.603009 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.603017 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:15.603023 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:15.603082 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:15.631137 1542350 cri.go:89] found id: ""
	I1213 16:16:15.631172 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.631181 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:15.631187 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:15.631245 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:15.664783 1542350 cri.go:89] found id: ""
	I1213 16:16:15.664810 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.664819 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:15.664825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:15.664886 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:15.691237 1542350 cri.go:89] found id: ""
	I1213 16:16:15.691264 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.691274 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:15.691280 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:15.691368 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:15.715449 1542350 cri.go:89] found id: ""
	I1213 16:16:15.715473 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.715482 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:15.715489 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:15.715553 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:15.740667 1542350 cri.go:89] found id: ""
	I1213 16:16:15.740692 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.740701 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:15.740707 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:15.740770 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:15.765160 1542350 cri.go:89] found id: ""
	I1213 16:16:15.765182 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.765191 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:15.765200 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:15.765212 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:15.820427 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:15.820466 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:15.836513 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:15.836541 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:15.903389 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:15.894908    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.895741    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897308    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897848    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.899415    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:15.894908    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.895741    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897308    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897848    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.899415    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:15.903412 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:15.903427 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:15.928787 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:15.928825 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:18.458780 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:18.469268 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:18.469341 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:18.497781 1542350 cri.go:89] found id: ""
	I1213 16:16:18.497811 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.497824 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:18.497831 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:18.497918 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:18.522772 1542350 cri.go:89] found id: ""
	I1213 16:16:18.522799 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.522808 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:18.522815 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:18.522874 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:18.549419 1542350 cri.go:89] found id: ""
	I1213 16:16:18.549443 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.549452 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:18.549457 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:18.549524 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:18.573853 1542350 cri.go:89] found id: ""
	I1213 16:16:18.573881 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.573889 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:18.573896 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:18.573960 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:18.604140 1542350 cri.go:89] found id: ""
	I1213 16:16:18.604167 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.604188 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:18.604194 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:18.604264 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:18.637649 1542350 cri.go:89] found id: ""
	I1213 16:16:18.637677 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.637686 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:18.637692 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:18.637752 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:18.668019 1542350 cri.go:89] found id: ""
	I1213 16:16:18.668045 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.668053 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:18.668059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:18.668120 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:18.694456 1542350 cri.go:89] found id: ""
	I1213 16:16:18.694482 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.694493 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:18.694503 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:18.694515 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:18.722967 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:18.722995 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:18.780808 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:18.780844 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:18.797393 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:18.797421 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:18.866061 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:18.858113    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.858623    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860137    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860610    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.862072    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:18.858113    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.858623    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860137    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860610    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.862072    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:18.866083 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:18.866096 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:21.391436 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:21.403266 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:21.403363 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:21.429372 1542350 cri.go:89] found id: ""
	I1213 16:16:21.429405 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.429415 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:21.429420 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:21.429479 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:21.454218 1542350 cri.go:89] found id: ""
	I1213 16:16:21.454287 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.454311 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:21.454329 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:21.454420 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:21.478016 1542350 cri.go:89] found id: ""
	I1213 16:16:21.478041 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.478049 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:21.478055 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:21.478112 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:21.504574 1542350 cri.go:89] found id: ""
	I1213 16:16:21.504612 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.504622 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:21.504629 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:21.504692 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:21.531727 1542350 cri.go:89] found id: ""
	I1213 16:16:21.531761 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.531770 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:21.531777 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:21.531836 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:21.556964 1542350 cri.go:89] found id: ""
	I1213 16:16:21.556999 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.557010 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:21.557018 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:21.557077 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:21.592445 1542350 cri.go:89] found id: ""
	I1213 16:16:21.592509 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.592533 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:21.592550 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:21.592645 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:21.620898 1542350 cri.go:89] found id: ""
	I1213 16:16:21.620920 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.620928 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:21.620937 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:21.620949 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:21.682810 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:21.682846 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:21.699275 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:21.699375 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:21.766336 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:21.758580    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.759107    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.760601    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.761028    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.762478    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:21.758580    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.759107    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.760601    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.761028    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.762478    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:21.766397 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:21.766426 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:21.791266 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:21.791300 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:24.319481 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:24.330216 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:24.330310 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:24.369003 1542350 cri.go:89] found id: ""
	I1213 16:16:24.369033 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.369041 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:24.369047 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:24.369106 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:24.396473 1542350 cri.go:89] found id: ""
	I1213 16:16:24.396502 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.396511 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:24.396516 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:24.396580 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:24.436915 1542350 cri.go:89] found id: ""
	I1213 16:16:24.436939 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.436948 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:24.436953 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:24.437013 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:24.465118 1542350 cri.go:89] found id: ""
	I1213 16:16:24.465139 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.465147 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:24.465153 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:24.465211 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:24.490097 1542350 cri.go:89] found id: ""
	I1213 16:16:24.490121 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.490130 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:24.490136 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:24.490196 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:24.520031 1542350 cri.go:89] found id: ""
	I1213 16:16:24.520096 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.520120 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:24.520141 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:24.520214 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:24.545891 1542350 cri.go:89] found id: ""
	I1213 16:16:24.545919 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.545928 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:24.545933 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:24.546014 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:24.574276 1542350 cri.go:89] found id: ""
	I1213 16:16:24.574313 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.574323 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:24.574353 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:24.574387 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:24.611068 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:24.611145 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:24.677764 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:24.677808 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:24.696759 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:24.696802 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:24.773564 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:24.765018    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.765700    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767375    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767981    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.769749    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:24.765018    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.765700    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767375    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767981    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.769749    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:24.773586 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:24.773598 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
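The block above is one pass of minikube's control-plane probe: after pgrep finds no kube-apiserver process, it asks containerd through "crictl ps -a --quiet --name=<component>" for each expected component (apiserver, etcd, coredns, scheduler, proxy, controller-manager, kindnet, dashboard), and every query returns an empty ID list. A minimal sketch of repeating that check by hand is below; PROFILE is a placeholder name, not taken from this log, and only standard minikube and crictl commands are used:

	# placeholder profile name; substitute the failing profile
	minikube ssh -p PROFILE "sudo crictl ps -a"    # all containers, any state
	minikube ssh -p PROFILE "sudo crictl pods"     # pod sandboxes known to containerd
	# an empty listing, as in the log above, suggests the control plane was never created in containerd

In that case the failure sits upstream of the container runtime, so the kubelet journal (gathered later in each cycle) is the next place to look.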
	I1213 16:16:27.299826 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:27.310825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:27.310902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:27.341771 1542350 cri.go:89] found id: ""
	I1213 16:16:27.341794 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.341803 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:27.341810 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:27.341876 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:27.369884 1542350 cri.go:89] found id: ""
	I1213 16:16:27.369908 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.369917 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:27.369923 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:27.369988 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:27.402575 1542350 cri.go:89] found id: ""
	I1213 16:16:27.402598 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.402606 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:27.402612 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:27.402680 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:27.429116 1542350 cri.go:89] found id: ""
	I1213 16:16:27.429157 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.429169 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:27.429176 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:27.429245 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:27.456147 1542350 cri.go:89] found id: ""
	I1213 16:16:27.456174 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.456183 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:27.456191 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:27.456254 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:27.481262 1542350 cri.go:89] found id: ""
	I1213 16:16:27.481288 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.481297 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:27.481304 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:27.481370 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:27.507140 1542350 cri.go:89] found id: ""
	I1213 16:16:27.507169 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.507179 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:27.507185 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:27.507269 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:27.532060 1542350 cri.go:89] found id: ""
	I1213 16:16:27.532139 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.532162 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:27.532180 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:27.532193 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:27.588083 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:27.588123 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:27.605875 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:27.605906 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:27.677799 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:27.667405    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.668242    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.669729    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.670202    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.673720    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:27.667405    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.668242    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.669729    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.670202    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.673720    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:27.677822 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:27.677834 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:27.703668 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:27.703704 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:30.232616 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:30.244334 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:30.244408 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:30.269730 1542350 cri.go:89] found id: ""
	I1213 16:16:30.269757 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.269765 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:30.269771 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:30.269830 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:30.296665 1542350 cri.go:89] found id: ""
	I1213 16:16:30.296693 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.296702 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:30.296709 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:30.296832 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:30.322172 1542350 cri.go:89] found id: ""
	I1213 16:16:30.322251 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.322276 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:30.322296 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:30.322405 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:30.364083 1542350 cri.go:89] found id: ""
	I1213 16:16:30.364113 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.364125 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:30.364138 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:30.364206 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:30.405727 1542350 cri.go:89] found id: ""
	I1213 16:16:30.405751 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.405759 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:30.405765 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:30.405825 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:30.432819 1542350 cri.go:89] found id: ""
	I1213 16:16:30.432846 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.432855 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:30.432862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:30.432921 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:30.458202 1542350 cri.go:89] found id: ""
	I1213 16:16:30.458228 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.458237 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:30.458243 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:30.458310 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:30.482950 1542350 cri.go:89] found id: ""
	I1213 16:16:30.482977 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.482987 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:30.482996 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:30.483008 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:30.507886 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:30.507921 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:30.538090 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:30.538159 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:30.593644 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:30.593729 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:30.610246 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:30.610272 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:30.684359 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:30.676539    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.677025    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678556    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678874    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.680436    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:30.676539    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.677025    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678556    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678874    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.680436    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
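Each cycle also runs kubectl describe nodes against the node-local kubeconfig, and every attempt fails with "connection refused" on https://localhost:8443, which is consistent with the empty crictl listings: nothing is serving the apiserver port. A hedged way to confirm that from inside the node (for example via minikube ssh) is sketched here; ss and curl are assumed to be available in the node image:

	# is anything listening on the apiserver port used by this kubeconfig?
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	# if a listener exists, probe its health endpoint (self-signed cert, hence -k)
	curl -k https://localhost:8443/livez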
	I1213 16:16:33.184602 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:33.195455 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:33.195556 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:33.225437 1542350 cri.go:89] found id: ""
	I1213 16:16:33.225459 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.225468 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:33.225474 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:33.225541 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:33.250024 1542350 cri.go:89] found id: ""
	I1213 16:16:33.250089 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.250113 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:33.250131 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:33.250218 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:33.275721 1542350 cri.go:89] found id: ""
	I1213 16:16:33.275747 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.275755 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:33.275762 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:33.275823 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:33.300346 1542350 cri.go:89] found id: ""
	I1213 16:16:33.300368 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.300377 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:33.300383 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:33.300442 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:33.324866 1542350 cri.go:89] found id: ""
	I1213 16:16:33.324889 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.324897 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:33.324904 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:33.324963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:33.354142 1542350 cri.go:89] found id: ""
	I1213 16:16:33.354216 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.354239 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:33.354257 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:33.354347 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:33.388195 1542350 cri.go:89] found id: ""
	I1213 16:16:33.388216 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.388224 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:33.388230 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:33.388286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:33.416283 1542350 cri.go:89] found id: ""
	I1213 16:16:33.416306 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.416314 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:33.416325 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:33.416337 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:33.432175 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:33.432206 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:33.499040 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:33.490874    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.491494    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493033    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493510    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.495051    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:33.490874    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.491494    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493033    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493510    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.495051    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:33.499062 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:33.499074 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:33.524925 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:33.524958 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:33.554998 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:33.555026 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:36.110953 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:36.121861 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:36.121930 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:36.146369 1542350 cri.go:89] found id: ""
	I1213 16:16:36.146429 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.146450 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:36.146476 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:36.146557 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:36.171595 1542350 cri.go:89] found id: ""
	I1213 16:16:36.171617 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.171625 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:36.171631 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:36.171693 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:36.196869 1542350 cri.go:89] found id: ""
	I1213 16:16:36.196891 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.196900 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:36.196906 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:36.196963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:36.221290 1542350 cri.go:89] found id: ""
	I1213 16:16:36.221317 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.221326 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:36.221338 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:36.221400 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:36.246254 1542350 cri.go:89] found id: ""
	I1213 16:16:36.246280 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.246289 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:36.246294 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:36.246352 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:36.276463 1542350 cri.go:89] found id: ""
	I1213 16:16:36.276486 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.276494 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:36.276500 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:36.276565 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:36.302414 1542350 cri.go:89] found id: ""
	I1213 16:16:36.302446 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.302454 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:36.302460 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:36.302530 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:36.327676 1542350 cri.go:89] found id: ""
	I1213 16:16:36.327753 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.327770 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:36.327781 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:36.327793 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:36.347589 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:36.347658 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:36.422910 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:36.414535    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.415436    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417255    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417652    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.419100    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:36.414535    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.415436    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417255    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417652    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.419100    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:36.422940 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:36.422968 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:36.449077 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:36.449114 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:36.476904 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:36.476935 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
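Because no control-plane containers exist, the kubelet and containerd journals that this loop keeps collecting are the most likely place to explain why the static pods never started. A short filter over the same journalctl output, run inside the node, might look like the sketch below; the grep pattern and the manifest path are conventional kubeadm defaults, not values taken from this log:

	# narrow the last 400 kubelet/containerd lines to errors
	sudo journalctl -u kubelet -n 400 --no-pager | grep -iE "error|fail" | tail -n 40
	sudo journalctl -u containerd -n 400 --no-pager | grep -iE "error|fail" | tail -n 40
	# static pod manifests the kubelet is expected to launch
	ls -l /etc/kubernetes/manifests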
	I1213 16:16:39.032927 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:39.043398 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:39.043466 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:39.068941 1542350 cri.go:89] found id: ""
	I1213 16:16:39.068968 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.068977 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:39.068983 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:39.069040 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:39.094525 1542350 cri.go:89] found id: ""
	I1213 16:16:39.094548 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.094557 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:39.094564 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:39.094626 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:39.118854 1542350 cri.go:89] found id: ""
	I1213 16:16:39.118875 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.118884 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:39.118890 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:39.118946 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:39.147615 1542350 cri.go:89] found id: ""
	I1213 16:16:39.147642 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.147651 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:39.147657 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:39.147719 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:39.173015 1542350 cri.go:89] found id: ""
	I1213 16:16:39.173038 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.173047 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:39.173053 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:39.173121 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:39.198427 1542350 cri.go:89] found id: ""
	I1213 16:16:39.198453 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.198462 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:39.198468 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:39.198525 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:39.223491 1542350 cri.go:89] found id: ""
	I1213 16:16:39.223514 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.223522 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:39.223528 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:39.223587 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:39.254117 1542350 cri.go:89] found id: ""
	I1213 16:16:39.254148 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.254157 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:39.254166 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:39.254178 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:39.313667 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:39.313706 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:39.331137 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:39.331215 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:39.414971 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:39.406302    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.407133    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.408880    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.409415    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.411052    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:39.406302    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.407133    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.408880    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.409415    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.411052    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:39.414990 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:39.415003 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:39.440561 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:39.440604 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:41.973087 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:41.983385 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:41.983456 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:42.010547 1542350 cri.go:89] found id: ""
	I1213 16:16:42.010644 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.010658 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:42.010666 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:42.010780 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:42.041355 1542350 cri.go:89] found id: ""
	I1213 16:16:42.041379 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.041388 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:42.041394 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:42.041462 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:42.074781 1542350 cri.go:89] found id: ""
	I1213 16:16:42.074808 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.074818 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:42.074825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:42.074895 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:42.105943 1542350 cri.go:89] found id: ""
	I1213 16:16:42.105972 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.105980 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:42.105987 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:42.106062 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:42.144036 1542350 cri.go:89] found id: ""
	I1213 16:16:42.144062 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.144070 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:42.144077 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:42.144144 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:42.177438 1542350 cri.go:89] found id: ""
	I1213 16:16:42.177464 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.177474 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:42.177482 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:42.177555 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:42.209616 1542350 cri.go:89] found id: ""
	I1213 16:16:42.209644 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.209653 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:42.209662 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:42.209730 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:42.240251 1542350 cri.go:89] found id: ""
	I1213 16:16:42.240283 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.240293 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:42.240303 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:42.240317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:42.274974 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:42.275008 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:42.333409 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:42.333488 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:42.353909 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:42.353998 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:42.431547 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:42.422852    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.423662    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425409    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425733    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.427251    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:42.422852    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.423662    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425409    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425733    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.427251    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:42.431570 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:42.431582 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:44.957982 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:44.968708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:44.968778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:44.998179 1542350 cri.go:89] found id: ""
	I1213 16:16:44.998205 1542350 logs.go:282] 0 containers: []
	W1213 16:16:44.998214 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:44.998220 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:44.998281 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:45.055672 1542350 cri.go:89] found id: ""
	I1213 16:16:45.055695 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.055705 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:45.055712 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:45.055785 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:45.112504 1542350 cri.go:89] found id: ""
	I1213 16:16:45.112598 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.112625 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:45.112646 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:45.112821 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:45.148966 1542350 cri.go:89] found id: ""
	I1213 16:16:45.148993 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.149002 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:45.149008 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:45.149081 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:45.215276 1542350 cri.go:89] found id: ""
	I1213 16:16:45.215383 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.215547 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:45.215573 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:45.215685 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:45.266343 1542350 cri.go:89] found id: ""
	I1213 16:16:45.266422 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.266448 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:45.266469 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:45.266569 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:45.311801 1542350 cri.go:89] found id: ""
	I1213 16:16:45.311877 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.311905 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:45.311925 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:45.312039 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:45.345856 1542350 cri.go:89] found id: ""
	I1213 16:16:45.345884 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.345894 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:45.345904 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:45.345928 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:45.416309 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:45.416392 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:45.433509 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:45.433593 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:45.504820 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:45.495815    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.496673    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498435    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498745    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.500220    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:45.495815    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.496673    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498435    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498745    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.500220    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:45.504841 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:45.504855 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:45.530797 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:45.530836 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:48.061294 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:48.072582 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:48.072653 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:48.101139 1542350 cri.go:89] found id: ""
	I1213 16:16:48.101164 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.101173 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:48.101179 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:48.101250 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:48.127077 1542350 cri.go:89] found id: ""
	I1213 16:16:48.127100 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.127109 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:48.127115 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:48.127179 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:48.152708 1542350 cri.go:89] found id: ""
	I1213 16:16:48.152731 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.152740 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:48.152746 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:48.152806 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:48.183194 1542350 cri.go:89] found id: ""
	I1213 16:16:48.183220 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.183228 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:48.183235 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:48.183295 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:48.208544 1542350 cri.go:89] found id: ""
	I1213 16:16:48.208612 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.208638 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:48.208658 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:48.208773 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:48.234599 1542350 cri.go:89] found id: ""
	I1213 16:16:48.234633 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.234642 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:48.234667 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:48.234745 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:48.259586 1542350 cri.go:89] found id: ""
	I1213 16:16:48.259614 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.259623 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:48.259629 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:48.259712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:48.283477 1542350 cri.go:89] found id: ""
	I1213 16:16:48.283499 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.283509 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:48.283542 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:48.283561 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:48.339116 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:48.339190 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:48.360686 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:48.360767 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:48.433619 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:48.425212    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.425811    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427513    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427900    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.429452    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:48.425212    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.425811    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427513    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427900    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.429452    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:48.433643 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:48.433655 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:48.458793 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:48.458837 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:50.988521 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:50.999862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:50.999930 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:51.029019 1542350 cri.go:89] found id: ""
	I1213 16:16:51.029045 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.029054 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:51.029060 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:51.029132 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:51.058195 1542350 cri.go:89] found id: ""
	I1213 16:16:51.058222 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.058231 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:51.058237 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:51.058297 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:51.083486 1542350 cri.go:89] found id: ""
	I1213 16:16:51.083512 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.083521 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:51.083527 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:51.083589 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:51.108698 1542350 cri.go:89] found id: ""
	I1213 16:16:51.108723 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.108733 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:51.108739 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:51.108801 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:51.133979 1542350 cri.go:89] found id: ""
	I1213 16:16:51.134003 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.134011 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:51.134017 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:51.134074 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:51.161527 1542350 cri.go:89] found id: ""
	I1213 16:16:51.161552 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.161562 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:51.161568 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:51.161627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:51.186814 1542350 cri.go:89] found id: ""
	I1213 16:16:51.186841 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.186850 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:51.186856 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:51.186916 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:51.216180 1542350 cri.go:89] found id: ""
	I1213 16:16:51.216212 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.216221 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:51.216230 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:51.216245 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:51.273877 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:51.273919 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:51.291469 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:51.291502 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:51.365379 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:51.356884    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.357610    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359231    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359765    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.361313    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:51.356884    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.357610    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359231    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359765    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.361313    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:51.365447 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:51.365471 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:51.393925 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:51.393997 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:53.927124 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:53.937787 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:53.937865 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:53.965198 1542350 cri.go:89] found id: ""
	I1213 16:16:53.965222 1542350 logs.go:282] 0 containers: []
	W1213 16:16:53.965230 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:53.965236 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:53.965295 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:53.990127 1542350 cri.go:89] found id: ""
	I1213 16:16:53.990153 1542350 logs.go:282] 0 containers: []
	W1213 16:16:53.990162 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:53.990168 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:53.990227 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:54.017573 1542350 cri.go:89] found id: ""
	I1213 16:16:54.017600 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.017610 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:54.017627 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:54.017691 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:54.042201 1542350 cri.go:89] found id: ""
	I1213 16:16:54.042223 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.042232 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:54.042239 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:54.042297 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:54.069040 1542350 cri.go:89] found id: ""
	I1213 16:16:54.069064 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.069072 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:54.069079 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:54.069139 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:54.094593 1542350 cri.go:89] found id: ""
	I1213 16:16:54.094614 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.094624 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:54.094630 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:54.094692 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:54.118976 1542350 cri.go:89] found id: ""
	I1213 16:16:54.119047 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.119070 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:54.119088 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:54.119162 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:54.145323 1542350 cri.go:89] found id: ""
	I1213 16:16:54.145346 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.145355 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:54.145364 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:54.145375 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:54.170838 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:54.170873 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:54.198725 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:54.198752 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:54.253610 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:54.253646 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:54.272399 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:54.272428 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:54.360945 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:54.351253    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.352548    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.354388    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.355099    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.356700    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:54.351253    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.352548    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.354388    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.355099    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.356700    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:56.861910 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:56.873998 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:56.874110 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:56.904398 1542350 cri.go:89] found id: ""
	I1213 16:16:56.904423 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.904432 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:56.904438 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:56.904498 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:56.928756 1542350 cri.go:89] found id: ""
	I1213 16:16:56.928783 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.928792 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:56.928798 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:56.928856 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:56.952449 1542350 cri.go:89] found id: ""
	I1213 16:16:56.952473 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.952481 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:56.952487 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:56.952544 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:56.976949 1542350 cri.go:89] found id: ""
	I1213 16:16:56.976973 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.976981 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:56.976988 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:56.977074 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:57.001996 1542350 cri.go:89] found id: ""
	I1213 16:16:57.002023 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.002032 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:57.002039 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:57.002107 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:57.033494 1542350 cri.go:89] found id: ""
	I1213 16:16:57.033519 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.033527 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:57.033533 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:57.033590 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:57.057055 1542350 cri.go:89] found id: ""
	I1213 16:16:57.057082 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.057090 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:57.057096 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:57.057153 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:57.086023 1542350 cri.go:89] found id: ""
	I1213 16:16:57.086047 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.086057 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:57.086066 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:57.086078 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:57.140604 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:57.140639 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:57.156471 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:57.156501 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:57.226365 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:57.217689   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.218500   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220107   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220657   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.222574   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:57.217689   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.218500   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220107   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220657   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.222574   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:57.226409 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:57.226425 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:57.251875 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:57.251911 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:59.781524 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:59.792544 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:59.792620 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:59.817081 1542350 cri.go:89] found id: ""
	I1213 16:16:59.817108 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.817123 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:59.817130 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:59.817197 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:59.854425 1542350 cri.go:89] found id: ""
	I1213 16:16:59.854453 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.854463 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:59.854469 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:59.854529 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:59.891724 1542350 cri.go:89] found id: ""
	I1213 16:16:59.891750 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.891759 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:59.891766 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:59.891826 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:59.921656 1542350 cri.go:89] found id: ""
	I1213 16:16:59.921682 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.921691 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:59.921697 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:59.921757 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:59.946905 1542350 cri.go:89] found id: ""
	I1213 16:16:59.946930 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.946943 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:59.946949 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:59.947011 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:59.974061 1542350 cri.go:89] found id: ""
	I1213 16:16:59.974087 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.974096 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:59.974103 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:59.974181 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:00.003912 1542350 cri.go:89] found id: ""
	I1213 16:17:00.003945 1542350 logs.go:282] 0 containers: []
	W1213 16:17:00.003955 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:00.003962 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:00.004041 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:00.129167 1542350 cri.go:89] found id: ""
	I1213 16:17:00.129242 1542350 logs.go:282] 0 containers: []
	W1213 16:17:00.129267 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:00.129291 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:00.129321 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:00.325276 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:00.316397   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.317316   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.318748   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.319511   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.320676   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:00.316397   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.317316   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.318748   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.319511   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.320676   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:00.325303 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:00.325317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:00.357630 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:00.357684 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:00.417887 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:00.417929 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:00.512817 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:00.512861 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:03.034231 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:03.045928 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:03.046041 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:03.073150 1542350 cri.go:89] found id: ""
	I1213 16:17:03.073178 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.073187 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:03.073194 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:03.073257 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:03.100010 1542350 cri.go:89] found id: ""
	I1213 16:17:03.100036 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.100046 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:03.100052 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:03.100118 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:03.126901 1542350 cri.go:89] found id: ""
	I1213 16:17:03.126929 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.126938 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:03.126944 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:03.127007 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:03.158512 1542350 cri.go:89] found id: ""
	I1213 16:17:03.158538 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.158547 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:03.158554 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:03.158623 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:03.186730 1542350 cri.go:89] found id: ""
	I1213 16:17:03.186757 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.186766 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:03.186773 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:03.186843 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:03.213877 1542350 cri.go:89] found id: ""
	I1213 16:17:03.213913 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.213922 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:03.213929 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:03.214000 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:03.244284 1542350 cri.go:89] found id: ""
	I1213 16:17:03.244360 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.244382 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:03.244401 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:03.244496 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:03.272102 1542350 cri.go:89] found id: ""
	I1213 16:17:03.272193 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.272210 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:03.272221 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:03.272234 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:03.330001 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:03.330036 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:03.347681 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:03.347716 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:03.430544 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:03.421427   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.422228   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424046   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424538   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.426050   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:03.421427   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.422228   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424046   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424538   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.426050   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:03.430566 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:03.430581 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:03.457512 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:03.457552 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:05.988326 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:06.000598 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:06.000678 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:06.036782 1542350 cri.go:89] found id: ""
	I1213 16:17:06.036859 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.036876 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:06.036891 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:06.036960 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:06.066595 1542350 cri.go:89] found id: ""
	I1213 16:17:06.066623 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.066633 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:06.066640 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:06.066705 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:06.095017 1542350 cri.go:89] found id: ""
	I1213 16:17:06.095047 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.095057 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:06.095064 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:06.095146 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:06.123113 1542350 cri.go:89] found id: ""
	I1213 16:17:06.123140 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.123150 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:06.123156 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:06.123223 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:06.150821 1542350 cri.go:89] found id: ""
	I1213 16:17:06.150847 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.150856 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:06.150862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:06.150925 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:06.176578 1542350 cri.go:89] found id: ""
	I1213 16:17:06.176608 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.176616 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:06.176623 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:06.176690 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:06.207351 1542350 cri.go:89] found id: ""
	I1213 16:17:06.207387 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.207397 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:06.207404 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:06.207468 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:06.233849 1542350 cri.go:89] found id: ""
	I1213 16:17:06.233872 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.233881 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:06.233890 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:06.233907 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:06.250685 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:06.250716 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:06.319519 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:06.310314   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.310885   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.312626   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.313247   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.315150   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:06.310314   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.310885   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.312626   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.313247   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.315150   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:06.319544 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:06.319566 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:06.346128 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:06.346163 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:06.386358 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:06.386439 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:08.950033 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:08.960761 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:08.960908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:08.984689 1542350 cri.go:89] found id: ""
	I1213 16:17:08.984727 1542350 logs.go:282] 0 containers: []
	W1213 16:17:08.984737 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:08.984760 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:08.984839 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:09.014786 1542350 cri.go:89] found id: ""
	I1213 16:17:09.014811 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.014820 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:09.014826 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:09.014890 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:09.044222 1542350 cri.go:89] found id: ""
	I1213 16:17:09.044257 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.044267 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:09.044276 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:09.044344 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:09.077612 1542350 cri.go:89] found id: ""
	I1213 16:17:09.077685 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.077708 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:09.077726 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:09.077815 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:09.105512 1542350 cri.go:89] found id: ""
	I1213 16:17:09.105535 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.105545 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:09.105551 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:09.105617 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:09.129780 1542350 cri.go:89] found id: ""
	I1213 16:17:09.129803 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.129811 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:09.129817 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:09.129878 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:09.154967 1542350 cri.go:89] found id: ""
	I1213 16:17:09.154993 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.155002 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:09.155009 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:09.155076 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:09.179699 1542350 cri.go:89] found id: ""
	I1213 16:17:09.179763 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.179789 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:09.179806 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:09.179817 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:09.235549 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:09.235580 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:09.251403 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:09.251431 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:09.319531 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:09.311333   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.311986   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.313546   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.314046   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.315495   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:09.311333   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.311986   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.313546   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.314046   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.315495   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:09.319549 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:09.319561 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:09.346608 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:09.346650 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:11.878089 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:11.889358 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:11.889432 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:11.915293 1542350 cri.go:89] found id: ""
	I1213 16:17:11.915330 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.915339 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:11.915346 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:11.915408 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:11.945256 1542350 cri.go:89] found id: ""
	I1213 16:17:11.945334 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.945359 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:11.945374 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:11.945452 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:11.969767 1542350 cri.go:89] found id: ""
	I1213 16:17:11.969794 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.969803 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:11.969809 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:11.969871 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:11.993969 1542350 cri.go:89] found id: ""
	I1213 16:17:11.993996 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.994005 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:11.994011 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:11.994089 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:12.029493 1542350 cri.go:89] found id: ""
	I1213 16:17:12.029521 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.029531 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:12.029543 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:12.029608 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:12.059180 1542350 cri.go:89] found id: ""
	I1213 16:17:12.059208 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.059217 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:12.059223 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:12.059283 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:12.087232 1542350 cri.go:89] found id: ""
	I1213 16:17:12.087261 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.087270 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:12.087276 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:12.087371 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:12.112813 1542350 cri.go:89] found id: ""
	I1213 16:17:12.112835 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.112844 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:12.112853 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:12.112864 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:12.138376 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:12.138408 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:12.166357 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:12.166387 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:12.222375 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:12.222410 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:12.239215 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:12.239247 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:12.308445 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:12.300806   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.301332   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.302968   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.303390   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.304404   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:12.300806   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.301332   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.302968   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.303390   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.304404   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:14.808692 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:14.819373 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:14.819444 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:14.852674 1542350 cri.go:89] found id: ""
	I1213 16:17:14.852703 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.852712 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:14.852728 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:14.852788 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:14.883668 1542350 cri.go:89] found id: ""
	I1213 16:17:14.883695 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.883704 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:14.883710 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:14.883767 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:14.911607 1542350 cri.go:89] found id: ""
	I1213 16:17:14.911630 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.911638 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:14.911644 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:14.911706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:14.936933 1542350 cri.go:89] found id: ""
	I1213 16:17:14.936960 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.936970 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:14.936977 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:14.937035 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:14.962547 1542350 cri.go:89] found id: ""
	I1213 16:17:14.962570 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.962580 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:14.962586 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:14.962689 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:14.986795 1542350 cri.go:89] found id: ""
	I1213 16:17:14.986820 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.986836 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:14.986843 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:14.986903 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:15.033107 1542350 cri.go:89] found id: ""
	I1213 16:17:15.033185 1542350 logs.go:282] 0 containers: []
	W1213 16:17:15.033224 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:15.033257 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:15.033365 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:15.061981 1542350 cri.go:89] found id: ""
	I1213 16:17:15.062060 1542350 logs.go:282] 0 containers: []
	W1213 16:17:15.062093 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:15.062116 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:15.062143 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:15.118734 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:15.118772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:15.135655 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:15.135685 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:15.203637 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:15.195493   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.196273   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.197861   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.198155   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.199758   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:15.195493   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.196273   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.197861   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.198155   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.199758   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:15.203658 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:15.203670 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:15.229691 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:15.229730 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:17.757141 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:17.767810 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:17.767883 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:17.795906 1542350 cri.go:89] found id: ""
	I1213 16:17:17.795930 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.795939 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:17.795945 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:17.796011 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:17.820499 1542350 cri.go:89] found id: ""
	I1213 16:17:17.820525 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.820534 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:17.820540 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:17.820597 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:17.852893 1542350 cri.go:89] found id: ""
	I1213 16:17:17.852922 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.852931 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:17.852936 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:17.852998 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:17.882522 1542350 cri.go:89] found id: ""
	I1213 16:17:17.882550 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.882559 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:17.882567 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:17.882625 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:17.910091 1542350 cri.go:89] found id: ""
	I1213 16:17:17.910119 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.910128 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:17.910133 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:17.910194 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:17.934842 1542350 cri.go:89] found id: ""
	I1213 16:17:17.934877 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.934886 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:17.934892 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:17.934957 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:17.959436 1542350 cri.go:89] found id: ""
	I1213 16:17:17.959470 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.959480 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:17.959491 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:17.959563 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:17.984392 1542350 cri.go:89] found id: ""
	I1213 16:17:17.984422 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.984431 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:17.984440 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:17.984452 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:18.039527 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:18.039566 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:18.055611 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:18.055637 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:18.119895 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:18.111397   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.112071   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.113661   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.114180   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.115834   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:18.111397   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.112071   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.113661   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.114180   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.115834   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:18.119920 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:18.119935 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:18.145247 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:18.145282 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:20.679491 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:20.690101 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:20.690172 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:20.715727 1542350 cri.go:89] found id: ""
	I1213 16:17:20.715753 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.715770 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:20.715780 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:20.715849 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:20.743470 1542350 cri.go:89] found id: ""
	I1213 16:17:20.743496 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.743504 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:20.743511 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:20.743570 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:20.768457 1542350 cri.go:89] found id: ""
	I1213 16:17:20.768480 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.768496 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:20.768503 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:20.768561 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:20.792618 1542350 cri.go:89] found id: ""
	I1213 16:17:20.792644 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.792653 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:20.792660 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:20.792718 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:20.817055 1542350 cri.go:89] found id: ""
	I1213 16:17:20.817077 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.817087 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:20.817093 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:20.817155 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:20.847328 1542350 cri.go:89] found id: ""
	I1213 16:17:20.847351 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.847360 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:20.847366 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:20.847428 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:20.885859 1542350 cri.go:89] found id: ""
	I1213 16:17:20.885882 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.885891 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:20.885898 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:20.885956 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:20.915753 1542350 cri.go:89] found id: ""
	I1213 16:17:20.915784 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.915794 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:20.915803 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:20.915815 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:20.970894 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:20.970934 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:20.986885 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:20.986910 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:21.055027 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:21.046462   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.046998   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.048753   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.049334   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.051047   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:21.046462   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.046998   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.048753   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.049334   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.051047   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:21.055049 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:21.055062 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:21.079833 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:21.079866 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:23.608166 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:23.619347 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:23.619414 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:23.649699 1542350 cri.go:89] found id: ""
	I1213 16:17:23.649721 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.649729 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:23.649736 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:23.649795 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:23.675224 1542350 cri.go:89] found id: ""
	I1213 16:17:23.675246 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.675255 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:23.675261 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:23.675349 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:23.700895 1542350 cri.go:89] found id: ""
	I1213 16:17:23.700918 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.700927 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:23.700933 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:23.700996 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:23.729110 1542350 cri.go:89] found id: ""
	I1213 16:17:23.729176 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.729191 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:23.729198 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:23.729257 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:23.753661 1542350 cri.go:89] found id: ""
	I1213 16:17:23.753688 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.753697 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:23.753703 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:23.753774 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:23.778169 1542350 cri.go:89] found id: ""
	I1213 16:17:23.778217 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.778227 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:23.778234 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:23.778301 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:23.802589 1542350 cri.go:89] found id: ""
	I1213 16:17:23.802622 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.802631 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:23.802637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:23.802708 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:23.832514 1542350 cri.go:89] found id: ""
	I1213 16:17:23.832548 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.832558 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:23.832569 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:23.832582 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:23.917876 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:23.909157   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.909608   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.910836   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.911543   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.913354   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:23.909157   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.909608   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.910836   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.911543   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.913354   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:23.917899 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:23.917918 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:23.943509 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:23.943548 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:23.971452 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:23.971478 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:24.027358 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:24.027396 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:26.545810 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:26.556391 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:26.556463 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:26.580187 1542350 cri.go:89] found id: ""
	I1213 16:17:26.580210 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.580219 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:26.580239 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:26.580300 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:26.608397 1542350 cri.go:89] found id: ""
	I1213 16:17:26.608420 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.608429 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:26.608435 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:26.608496 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:26.636638 1542350 cri.go:89] found id: ""
	I1213 16:17:26.636661 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.636669 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:26.636675 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:26.636734 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:26.665248 1542350 cri.go:89] found id: ""
	I1213 16:17:26.665274 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.665283 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:26.665289 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:26.665365 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:26.695808 1542350 cri.go:89] found id: ""
	I1213 16:17:26.695835 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.695854 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:26.695861 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:26.695918 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:26.721653 1542350 cri.go:89] found id: ""
	I1213 16:17:26.721678 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.721687 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:26.721693 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:26.721751 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:26.750218 1542350 cri.go:89] found id: ""
	I1213 16:17:26.750241 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.750250 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:26.750256 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:26.750313 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:26.777036 1542350 cri.go:89] found id: ""
	I1213 16:17:26.777059 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.777068 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:26.777077 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:26.777088 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:26.833887 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:26.833929 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:26.851275 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:26.851303 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:26.934951 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:26.926741   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.927535   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929160   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929458   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.930955   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:26.926741   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.927535   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929160   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929458   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.930955   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:26.934973 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:26.934985 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:26.960388 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:26.960424 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:29.488577 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:29.499475 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:29.499551 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:29.524176 1542350 cri.go:89] found id: ""
	I1213 16:17:29.524202 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.524212 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:29.524219 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:29.524281 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:29.558368 1542350 cri.go:89] found id: ""
	I1213 16:17:29.558393 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.558408 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:29.558415 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:29.558504 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:29.589170 1542350 cri.go:89] found id: ""
	I1213 16:17:29.589197 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.589206 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:29.589212 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:29.589273 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:29.621623 1542350 cri.go:89] found id: ""
	I1213 16:17:29.621697 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.621722 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:29.621741 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:29.621828 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:29.651459 1542350 cri.go:89] found id: ""
	I1213 16:17:29.651534 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.651557 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:29.651584 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:29.651712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:29.676637 1542350 cri.go:89] found id: ""
	I1213 16:17:29.676663 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.676673 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:29.676679 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:29.676752 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:29.701821 1542350 cri.go:89] found id: ""
	I1213 16:17:29.701845 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.701855 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:29.701861 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:29.701920 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:29.726528 1542350 cri.go:89] found id: ""
	I1213 16:17:29.726555 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.726564 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:29.726574 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:29.726585 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:29.781999 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:29.782035 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:29.798088 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:29.798116 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:29.881323 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:29.870998   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.871883   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.873746   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.874663   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.876377   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:29.870998   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.871883   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.873746   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.874663   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.876377   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:29.881348 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:29.881361 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:29.911425 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:29.911464 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:32.442588 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:32.453594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:32.453664 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:32.479865 1542350 cri.go:89] found id: ""
	I1213 16:17:32.479893 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.479902 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:32.479909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:32.479975 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:32.505131 1542350 cri.go:89] found id: ""
	I1213 16:17:32.505159 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.505168 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:32.505175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:32.505239 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:32.529697 1542350 cri.go:89] found id: ""
	I1213 16:17:32.529723 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.529732 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:32.529738 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:32.529796 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:32.554812 1542350 cri.go:89] found id: ""
	I1213 16:17:32.554834 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.554850 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:32.554856 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:32.554915 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:32.582244 1542350 cri.go:89] found id: ""
	I1213 16:17:32.582270 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.582279 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:32.582286 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:32.582347 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:32.613711 1542350 cri.go:89] found id: ""
	I1213 16:17:32.613738 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.613747 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:32.613754 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:32.613818 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:32.642070 1542350 cri.go:89] found id: ""
	I1213 16:17:32.642097 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.642106 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:32.642112 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:32.642168 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:32.667382 1542350 cri.go:89] found id: ""
	I1213 16:17:32.667406 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.667415 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:32.667424 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:32.667436 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:32.683777 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:32.683808 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:32.750802 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:32.740355   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.741255   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.742960   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.743298   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.746651   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:32.740355   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.741255   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.742960   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.743298   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.746651   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:32.750824 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:32.750838 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:32.776516 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:32.776551 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:32.809331 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:32.809358 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:35.374938 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:35.387203 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:35.387276 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:35.412099 1542350 cri.go:89] found id: ""
	I1213 16:17:35.412124 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.412133 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:35.412139 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:35.412195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:35.436994 1542350 cri.go:89] found id: ""
	I1213 16:17:35.437031 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.437040 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:35.437047 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:35.437115 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:35.461531 1542350 cri.go:89] found id: ""
	I1213 16:17:35.461554 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.461562 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:35.461568 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:35.461627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:35.486070 1542350 cri.go:89] found id: ""
	I1213 16:17:35.486095 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.486105 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:35.486118 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:35.486176 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:35.515476 1542350 cri.go:89] found id: ""
	I1213 16:17:35.515501 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.515510 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:35.515516 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:35.515576 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:35.545886 1542350 cri.go:89] found id: ""
	I1213 16:17:35.545959 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.545995 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:35.546020 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:35.546110 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:35.575465 1542350 cri.go:89] found id: ""
	I1213 16:17:35.575489 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.575498 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:35.575504 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:35.575563 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:35.607235 1542350 cri.go:89] found id: ""
	I1213 16:17:35.607264 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.607273 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:35.607282 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:35.607294 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:35.671811 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:35.671850 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:35.687939 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:35.687972 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:35.751714 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:35.742668   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.743226   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.744917   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.745517   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.747275   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:35.742668   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.743226   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.744917   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.745517   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.747275   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:35.751733 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:35.751746 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:35.777517 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:35.777554 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:38.308841 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:38.319569 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:38.319645 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:38.344249 1542350 cri.go:89] found id: ""
	I1213 16:17:38.344276 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.344285 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:38.344291 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:38.344349 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:38.368637 1542350 cri.go:89] found id: ""
	I1213 16:17:38.368666 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.368676 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:38.368682 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:38.368746 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:38.397310 1542350 cri.go:89] found id: ""
	I1213 16:17:38.397335 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.397344 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:38.397350 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:38.397409 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:38.426892 1542350 cri.go:89] found id: ""
	I1213 16:17:38.426967 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.426989 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:38.427008 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:38.427091 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:38.451400 1542350 cri.go:89] found id: ""
	I1213 16:17:38.451423 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.451432 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:38.451437 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:38.451500 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:38.476411 1542350 cri.go:89] found id: ""
	I1213 16:17:38.476433 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.476441 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:38.476448 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:38.476506 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:38.502060 1542350 cri.go:89] found id: ""
	I1213 16:17:38.502083 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.502092 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:38.502098 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:38.502158 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:38.527156 1542350 cri.go:89] found id: ""
	I1213 16:17:38.527217 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.527240 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:38.527264 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:38.527289 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:38.583123 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:38.583161 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:38.606934 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:38.607014 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:38.678774 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:38.669944   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.670742   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672406   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672726   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.674153   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:38.669944   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.670742   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672406   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672726   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.674153   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:38.678794 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:38.678806 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:38.703623 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:38.703656 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:41.235499 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:41.246098 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:41.246199 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:41.272817 1542350 cri.go:89] found id: ""
	I1213 16:17:41.272884 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.272907 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:41.272921 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:41.272995 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:41.297573 1542350 cri.go:89] found id: ""
	I1213 16:17:41.297599 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.297608 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:41.297614 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:41.297722 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:41.325595 1542350 cri.go:89] found id: ""
	I1213 16:17:41.325663 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.325695 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:41.325708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:41.325784 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:41.350495 1542350 cri.go:89] found id: ""
	I1213 16:17:41.350519 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.350528 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:41.350534 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:41.350593 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:41.374833 1542350 cri.go:89] found id: ""
	I1213 16:17:41.374860 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.374869 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:41.374874 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:41.374931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:41.400881 1542350 cri.go:89] found id: ""
	I1213 16:17:41.400911 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.400920 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:41.400926 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:41.400983 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:41.425159 1542350 cri.go:89] found id: ""
	I1213 16:17:41.425182 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.425191 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:41.425197 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:41.425255 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:41.449690 1542350 cri.go:89] found id: ""
	I1213 16:17:41.449765 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.449788 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:41.449808 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:41.449845 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:41.465414 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:41.465441 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:41.531758 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:41.522350   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.523854   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.524927   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526433   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526949   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:41.522350   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.523854   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.524927   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526433   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526949   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:41.531782 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:41.531795 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:41.557072 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:41.557104 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:41.589367 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:41.589397 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:44.161155 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:44.173267 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:44.173342 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:44.202655 1542350 cri.go:89] found id: ""
	I1213 16:17:44.202682 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.202692 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:44.202699 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:44.202758 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:44.227871 1542350 cri.go:89] found id: ""
	I1213 16:17:44.227897 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.227905 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:44.227911 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:44.227972 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:44.253446 1542350 cri.go:89] found id: ""
	I1213 16:17:44.253473 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.253481 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:44.253487 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:44.253543 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:44.279358 1542350 cri.go:89] found id: ""
	I1213 16:17:44.279383 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.279392 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:44.279398 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:44.279464 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:44.303249 1542350 cri.go:89] found id: ""
	I1213 16:17:44.303275 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.303284 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:44.303344 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:44.303410 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:44.327446 1542350 cri.go:89] found id: ""
	I1213 16:17:44.327471 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.327480 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:44.327486 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:44.327546 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:44.353767 1542350 cri.go:89] found id: ""
	I1213 16:17:44.353793 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.353802 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:44.353808 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:44.353865 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:44.382033 1542350 cri.go:89] found id: ""
	I1213 16:17:44.382060 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.382068 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:44.382078 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:44.382089 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:44.436599 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:44.436634 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:44.452268 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:44.452298 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:44.515099 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:44.507045   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.507744   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509208   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509703   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.511153   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:44.507045   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.507744   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509208   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509703   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.511153   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:44.515122 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:44.515134 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:44.540023 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:44.540059 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:47.069691 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:47.080543 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:47.080615 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:47.114986 1542350 cri.go:89] found id: ""
	I1213 16:17:47.115062 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.115085 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:47.115103 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:47.115194 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:47.148767 1542350 cri.go:89] found id: ""
	I1213 16:17:47.148840 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.148850 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:47.148857 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:47.148931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:47.174407 1542350 cri.go:89] found id: ""
	I1213 16:17:47.174436 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.174445 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:47.174452 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:47.175791 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:47.207990 1542350 cri.go:89] found id: ""
	I1213 16:17:47.208024 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.208034 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:47.208041 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:47.208115 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:47.232910 1542350 cri.go:89] found id: ""
	I1213 16:17:47.232938 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.232947 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:47.232953 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:47.233015 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:47.256927 1542350 cri.go:89] found id: ""
	I1213 16:17:47.256952 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.256961 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:47.256967 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:47.257049 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:47.285254 1542350 cri.go:89] found id: ""
	I1213 16:17:47.285281 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.285290 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:47.285296 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:47.285356 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:47.309997 1542350 cri.go:89] found id: ""
	I1213 16:17:47.310027 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.310037 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:47.310046 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:47.310060 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:47.326038 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:47.326073 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:47.390775 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:47.383176   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.383650   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385126   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385506   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.386933   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:47.383176   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.383650   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385126   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385506   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.386933   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:47.390796 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:47.390809 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:47.415331 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:47.415362 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:47.442477 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:47.442503 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:50.000902 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:50.015948 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:50.016030 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:50.046794 1542350 cri.go:89] found id: ""
	I1213 16:17:50.046819 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.046827 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:50.046834 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:50.046890 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:50.073072 1542350 cri.go:89] found id: ""
	I1213 16:17:50.073106 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.073116 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:50.073124 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:50.073186 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:50.111358 1542350 cri.go:89] found id: ""
	I1213 16:17:50.111384 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.111393 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:50.111403 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:50.111468 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:50.141482 1542350 cri.go:89] found id: ""
	I1213 16:17:50.141510 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.141519 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:50.141525 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:50.141584 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:50.168684 1542350 cri.go:89] found id: ""
	I1213 16:17:50.168711 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.168720 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:50.168727 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:50.168806 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:50.194609 1542350 cri.go:89] found id: ""
	I1213 16:17:50.194633 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.194642 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:50.194648 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:50.194708 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:50.220707 1542350 cri.go:89] found id: ""
	I1213 16:17:50.220732 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.220741 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:50.220746 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:50.220810 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:50.245930 1542350 cri.go:89] found id: ""
	I1213 16:17:50.245956 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.245965 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:50.245975 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:50.245987 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:50.301111 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:50.301147 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:50.317024 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:50.317051 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:50.379354 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:50.370376   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.371063   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.372767   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.373333   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.375036   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:50.370376   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.371063   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.372767   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.373333   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.375036   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:50.379375 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:50.379388 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:50.403891 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:50.403925 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:52.933071 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:52.944075 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:52.944148 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:52.969292 1542350 cri.go:89] found id: ""
	I1213 16:17:52.969318 1542350 logs.go:282] 0 containers: []
	W1213 16:17:52.969327 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:52.969333 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:52.969393 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:52.997688 1542350 cri.go:89] found id: ""
	I1213 16:17:52.997717 1542350 logs.go:282] 0 containers: []
	W1213 16:17:52.997727 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:52.997733 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:52.997795 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:53.024102 1542350 cri.go:89] found id: ""
	I1213 16:17:53.024134 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.024144 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:53.024150 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:53.024214 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:53.054126 1542350 cri.go:89] found id: ""
	I1213 16:17:53.054149 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.054159 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:53.054165 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:53.054227 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:53.078840 1542350 cri.go:89] found id: ""
	I1213 16:17:53.078918 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.078940 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:53.078958 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:53.079041 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:53.134282 1542350 cri.go:89] found id: ""
	I1213 16:17:53.134313 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.134326 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:53.134332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:53.134401 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:53.170263 1542350 cri.go:89] found id: ""
	I1213 16:17:53.170287 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.170296 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:53.170302 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:53.170366 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:53.195555 1542350 cri.go:89] found id: ""
	I1213 16:17:53.195578 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.195587 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:53.195596 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:53.195612 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:53.221475 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:53.221510 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:53.256145 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:53.256172 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:53.312142 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:53.312178 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:53.328755 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:53.328784 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:53.392981 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:53.384949   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.385331   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.386801   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.387094   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.388517   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:53.384949   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.385331   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.386801   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.387094   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.388517   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:55.894678 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:55.905837 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:55.905910 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:55.931137 1542350 cri.go:89] found id: ""
	I1213 16:17:55.931159 1542350 logs.go:282] 0 containers: []
	W1213 16:17:55.931168 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:55.931175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:55.931236 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:55.955775 1542350 cri.go:89] found id: ""
	I1213 16:17:55.955801 1542350 logs.go:282] 0 containers: []
	W1213 16:17:55.955810 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:55.955817 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:55.955877 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:55.981227 1542350 cri.go:89] found id: ""
	I1213 16:17:55.981253 1542350 logs.go:282] 0 containers: []
	W1213 16:17:55.981262 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:55.981268 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:55.981329 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:56.008866 1542350 cri.go:89] found id: ""
	I1213 16:17:56.008892 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.008902 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:56.008909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:56.008975 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:56.035606 1542350 cri.go:89] found id: ""
	I1213 16:17:56.035635 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.035644 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:56.035650 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:56.035712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:56.061753 1542350 cri.go:89] found id: ""
	I1213 16:17:56.061780 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.061789 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:56.061795 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:56.061858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:56.099036 1542350 cri.go:89] found id: ""
	I1213 16:17:56.099065 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.099074 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:56.099081 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:56.099142 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:56.133464 1542350 cri.go:89] found id: ""
	I1213 16:17:56.133491 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.133500 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:56.133510 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:56.133522 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:56.155287 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:56.155412 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:56.223561 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:56.214782   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.215583   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217326   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217944   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.219582   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:56.214782   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.215583   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217326   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217944   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.219582   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:56.223629 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:56.223650 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:56.249923 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:56.249965 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:56.280662 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:56.280692 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:58.836837 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:58.848594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:58.848659 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:58.881904 1542350 cri.go:89] found id: ""
	I1213 16:17:58.881927 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.881935 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:58.881941 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:58.882001 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:58.917932 1542350 cri.go:89] found id: ""
	I1213 16:17:58.917954 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.917963 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:58.917969 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:58.918028 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:58.945580 1542350 cri.go:89] found id: ""
	I1213 16:17:58.945653 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.945668 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:58.945676 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:58.945753 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:58.971398 1542350 cri.go:89] found id: ""
	I1213 16:17:58.971424 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.971434 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:58.971440 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:58.971503 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:59.001302 1542350 cri.go:89] found id: ""
	I1213 16:17:59.001329 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.001339 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:59.001345 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:59.001409 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:59.028353 1542350 cri.go:89] found id: ""
	I1213 16:17:59.028379 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.028388 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:59.028394 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:59.028470 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:59.052548 1542350 cri.go:89] found id: ""
	I1213 16:17:59.052577 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.052586 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:59.052593 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:59.052653 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:59.077515 1542350 cri.go:89] found id: ""
	I1213 16:17:59.077541 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.077550 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:59.077560 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:59.077571 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:59.141173 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:59.141249 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:59.158291 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:59.158371 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:59.225799 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:59.216416   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.217255   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.219459   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.220025   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.221754   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:59.216416   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.217255   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.219459   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.220025   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.221754   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:59.225867 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:59.225890 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:59.251561 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:59.251597 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:01.784053 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:01.795325 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:01.795393 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:01.819579 1542350 cri.go:89] found id: ""
	I1213 16:18:01.819605 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.819615 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:01.819622 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:01.819683 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:01.857561 1542350 cri.go:89] found id: ""
	I1213 16:18:01.857588 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.857597 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:01.857604 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:01.857668 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:01.893605 1542350 cri.go:89] found id: ""
	I1213 16:18:01.893633 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.893642 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:01.893650 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:01.893706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:01.931676 1542350 cri.go:89] found id: ""
	I1213 16:18:01.931783 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.931803 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:01.931812 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:01.931935 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:01.959175 1542350 cri.go:89] found id: ""
	I1213 16:18:01.959249 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.959272 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:01.959292 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:01.959398 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:01.984753 1542350 cri.go:89] found id: ""
	I1213 16:18:01.984784 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.984794 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:01.984800 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:01.984865 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:02.016830 1542350 cri.go:89] found id: ""
	I1213 16:18:02.016860 1542350 logs.go:282] 0 containers: []
	W1213 16:18:02.016870 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:02.016876 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:02.016939 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:02.042747 1542350 cri.go:89] found id: ""
	I1213 16:18:02.042776 1542350 logs.go:282] 0 containers: []
	W1213 16:18:02.042785 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:02.042794 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:02.042805 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:02.101057 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:02.101093 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:02.118948 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:02.118972 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:02.188051 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:02.179066   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180018   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180758   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182317   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182771   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:02.179066   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180018   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180758   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182317   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182771   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:02.188077 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:02.188091 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:02.214276 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:02.214316 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:04.742630 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:04.753656 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:04.753725 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:04.779281 1542350 cri.go:89] found id: ""
	I1213 16:18:04.779338 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.779349 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:04.779355 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:04.779418 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:04.806060 1542350 cri.go:89] found id: ""
	I1213 16:18:04.806099 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.806108 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:04.806114 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:04.806195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:04.831390 1542350 cri.go:89] found id: ""
	I1213 16:18:04.831416 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.831425 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:04.831432 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:04.831501 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:04.865636 1542350 cri.go:89] found id: ""
	I1213 16:18:04.865663 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.865673 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:04.865680 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:04.865746 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:04.893812 1542350 cri.go:89] found id: ""
	I1213 16:18:04.893836 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.893845 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:04.893851 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:04.893916 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:04.922033 1542350 cri.go:89] found id: ""
	I1213 16:18:04.922062 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.922071 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:04.922077 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:04.922135 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:04.952026 1542350 cri.go:89] found id: ""
	I1213 16:18:04.952052 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.952061 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:04.952068 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:04.952129 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:04.979878 1542350 cri.go:89] found id: ""
	I1213 16:18:04.979901 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.979910 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:04.979919 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:04.979931 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:05.038448 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:05.038485 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:05.055056 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:05.055086 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:05.138791 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:05.128391   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.129071   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.130643   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.131070   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.134791   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:05.128391   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.129071   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.130643   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.131070   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.134791   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:05.138815 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:05.138828 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:05.170511 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:05.170549 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:07.701516 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:07.711811 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:07.711881 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:07.737115 1542350 cri.go:89] found id: ""
	I1213 16:18:07.737139 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.737148 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:07.737154 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:07.737216 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:07.761282 1542350 cri.go:89] found id: ""
	I1213 16:18:07.761305 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.761313 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:07.761319 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:07.761375 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:07.788777 1542350 cri.go:89] found id: ""
	I1213 16:18:07.788804 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.788813 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:07.788828 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:07.788893 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:07.813606 1542350 cri.go:89] found id: ""
	I1213 16:18:07.813633 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.813642 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:07.813650 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:07.813762 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:07.846070 1542350 cri.go:89] found id: ""
	I1213 16:18:07.846100 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.846109 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:07.846115 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:07.846178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:07.877868 1542350 cri.go:89] found id: ""
	I1213 16:18:07.877894 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.877903 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:07.877909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:07.877978 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:07.906297 1542350 cri.go:89] found id: ""
	I1213 16:18:07.906322 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.906331 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:07.906337 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:07.906411 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:07.935165 1542350 cri.go:89] found id: ""
	I1213 16:18:07.935191 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.935200 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:07.935209 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:07.935221 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:07.990632 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:07.990666 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:08.006620 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:08.006668 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:08.074292 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:08.065222   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.066181   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.067860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.068200   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.069739   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:08.065222   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.066181   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.067860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.068200   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.069739   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:08.074313 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:08.074338 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:08.103200 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:08.103236 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:10.643571 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:10.654051 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:10.654120 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:10.678184 1542350 cri.go:89] found id: ""
	I1213 16:18:10.678213 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.678222 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:10.678229 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:10.678286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:10.714102 1542350 cri.go:89] found id: ""
	I1213 16:18:10.714129 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.714137 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:10.714143 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:10.714204 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:10.738091 1542350 cri.go:89] found id: ""
	I1213 16:18:10.738114 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.738123 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:10.738129 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:10.738187 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:10.762969 1542350 cri.go:89] found id: ""
	I1213 16:18:10.762996 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.763005 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:10.763010 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:10.763068 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:10.788695 1542350 cri.go:89] found id: ""
	I1213 16:18:10.788718 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.788726 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:10.788732 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:10.788790 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:10.813304 1542350 cri.go:89] found id: ""
	I1213 16:18:10.813331 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.813339 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:10.813346 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:10.813404 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:10.840988 1542350 cri.go:89] found id: ""
	I1213 16:18:10.841013 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.841022 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:10.841028 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:10.841085 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:10.872923 1542350 cri.go:89] found id: ""
	I1213 16:18:10.872947 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.872957 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:10.872966 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:10.872978 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:10.913313 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:10.913342 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:10.970044 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:10.970079 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:10.986369 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:10.986399 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:11.056440 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:11.047528   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.048477   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050273   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050853   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.052407   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:11.047528   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.048477   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050273   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050853   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.052407   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:11.056461 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:11.056474 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:13.582630 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:13.593495 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:13.593570 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:13.618406 1542350 cri.go:89] found id: ""
	I1213 16:18:13.618429 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.618438 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:13.618444 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:13.618503 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:13.643366 1542350 cri.go:89] found id: ""
	I1213 16:18:13.643392 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.643401 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:13.643407 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:13.643470 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:13.668878 1542350 cri.go:89] found id: ""
	I1213 16:18:13.668903 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.668912 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:13.668918 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:13.668976 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:13.694282 1542350 cri.go:89] found id: ""
	I1213 16:18:13.694309 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.694318 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:13.694324 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:13.694383 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:13.722288 1542350 cri.go:89] found id: ""
	I1213 16:18:13.722318 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.722326 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:13.722332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:13.722391 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:13.749131 1542350 cri.go:89] found id: ""
	I1213 16:18:13.749156 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.749165 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:13.749177 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:13.749234 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:13.772877 1542350 cri.go:89] found id: ""
	I1213 16:18:13.772905 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.772915 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:13.772924 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:13.773024 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:13.797195 1542350 cri.go:89] found id: ""
	I1213 16:18:13.797222 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.797232 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:13.797241 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:13.797253 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:13.875404 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:13.862729   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.863769   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.867326   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869105   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869716   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:13.862729   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.863769   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.867326   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869105   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869716   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:13.875426 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:13.875439 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:13.907083 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:13.907122 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:13.940383 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:13.940412 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:13.999033 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:13.999073 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:16.517512 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:16.531616 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:16.531687 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:16.555921 1542350 cri.go:89] found id: ""
	I1213 16:18:16.555944 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.555952 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:16.555958 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:16.556017 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:16.585501 1542350 cri.go:89] found id: ""
	I1213 16:18:16.585523 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.585532 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:16.585538 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:16.585597 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:16.609776 1542350 cri.go:89] found id: ""
	I1213 16:18:16.609800 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.609810 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:16.609815 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:16.609874 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:16.633727 1542350 cri.go:89] found id: ""
	I1213 16:18:16.633801 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.633828 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:16.633847 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:16.633919 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:16.663010 1542350 cri.go:89] found id: ""
	I1213 16:18:16.663034 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.663042 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:16.663048 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:16.663104 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:16.689483 1542350 cri.go:89] found id: ""
	I1213 16:18:16.689506 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.689514 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:16.689521 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:16.689579 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:16.713920 1542350 cri.go:89] found id: ""
	I1213 16:18:16.713946 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.713955 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:16.713963 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:16.714023 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:16.739270 1542350 cri.go:89] found id: ""
	I1213 16:18:16.739297 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.739366 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:16.739377 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:16.739391 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:16.805237 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:16.796686   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.797427   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799164   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799670   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.801345   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:16.796686   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.797427   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799164   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799670   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.801345   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:16.805260 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:16.805272 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:16.830391 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:16.830421 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:16.875174 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:16.875203 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:16.940670 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:16.940707 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:19.457858 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:19.469305 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:19.469382 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:19.494702 1542350 cri.go:89] found id: ""
	I1213 16:18:19.494728 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.494739 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:19.494745 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:19.494805 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:19.526787 1542350 cri.go:89] found id: ""
	I1213 16:18:19.526811 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.526820 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:19.526826 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:19.526892 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:19.553929 1542350 cri.go:89] found id: ""
	I1213 16:18:19.553952 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.553961 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:19.553967 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:19.554025 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:19.578994 1542350 cri.go:89] found id: ""
	I1213 16:18:19.579021 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.579029 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:19.579036 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:19.579094 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:19.605160 1542350 cri.go:89] found id: ""
	I1213 16:18:19.605184 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.605202 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:19.605209 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:19.605271 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:19.629853 1542350 cri.go:89] found id: ""
	I1213 16:18:19.629880 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.629889 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:19.629896 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:19.629963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:19.654551 1542350 cri.go:89] found id: ""
	I1213 16:18:19.654578 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.654588 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:19.654594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:19.654674 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:19.679386 1542350 cri.go:89] found id: ""
	I1213 16:18:19.679410 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.679420 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:19.679429 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:19.679440 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:19.704792 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:19.704824 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:19.733848 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:19.733877 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:19.789321 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:19.789357 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:19.805414 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:19.805442 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:19.893754 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:19.884877   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886081   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886716   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888200   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888520   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:19.884877   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886081   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886716   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888200   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888520   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:22.394654 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:22.408580 1542350 out.go:203] 
	W1213 16:18:22.411606 1542350 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 16:18:22.411646 1542350 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 16:18:22.411657 1542350 out.go:285] * Related issues:
	W1213 16:18:22.411669 1542350 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 16:18:22.411682 1542350 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 16:18:22.414454 1542350 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172900077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172913106Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172962434Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172980173Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172991151Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173001884Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173012173Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173023233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173045772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173088831Z" level=info msg="Connect containerd service"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173368570Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.174111740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.184422184Z" level=info msg="Start subscribing containerd event"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.184638121Z" level=info msg="Start recovering state"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.184605425Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.184847954Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221873894Z" level=info msg="Start event monitor"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221935570Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221945818Z" level=info msg="Start streaming server"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221955041Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221964312Z" level=info msg="runtime interface starting up..."
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221971163Z" level=info msg="starting plugins..."
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.222006157Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 16:12:20 newest-cni-526531 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.224181983Z" level=info msg="containerd successfully booted in 0.088659s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:25.570807   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:25.571374   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:25.573107   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:25.573574   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:25.575046   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 16:18:25 up  8:00,  0 user,  load average: 1.00, 0.75, 1.05
	Linux newest-cni-526531 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 16:18:22 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:18:22 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 13 16:18:22 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:22 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:22 newest-cni-526531 kubelet[13296]: E1213 16:18:22.907964   13296 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:18:22 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:18:22 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:18:23 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 13 16:18:23 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:23 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:23 newest-cni-526531 kubelet[13302]: E1213 16:18:23.645681   13302 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:18:23 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:18:23 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:18:24 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 13 16:18:24 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:24 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:24 newest-cni-526531 kubelet[13322]: E1213 16:18:24.409110   13322 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:18:24 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:18:24 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:18:25 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 13 16:18:25 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:25 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:25 newest-cni-526531 kubelet[13328]: E1213 16:18:25.138808   13328 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:18:25 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:18:25 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531: exit status 2 (358.980357ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-526531" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (373.49s)
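Note: the kubelet journal captured above points at the likely root cause of this failure: every kubelet restart exits with "failed to validate kubelet configuration ... kubelet is configured to not run on a host using cgroup v1", so no control-plane containers are ever created and minikube eventually gives up with K8S_APISERVER_MISSING. A minimal way to confirm the cgroup mode the node sees (illustrative only; assumes the newest-cni-526531 container is still running, as the docker inspect output below indicates):

	# "cgroup2fs" indicates the unified cgroup v2 hierarchy; "tmpfs" indicates the node is on cgroup v1
	docker exec newest-cni-526531 stat -fc %T /sys/fs/cgroup/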

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (9.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-526531 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531: exit status 2 (344.944801ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-526531 -n newest-cni-526531
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-526531 -n newest-cni-526531: exit status 2 (341.892043ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-526531 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531: exit status 2 (326.637142ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-526531 -n newest-cni-526531
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-526531 -n newest-cni-526531: exit status 2 (339.499898ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
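Note: given that the kubelet never came up during SecondStart, the pause/unpause assertions above cannot pass; the "Stopped" statuses here are a downstream symptom of the same cgroup v1 validation failure rather than an independent regression. A quick manual cross-check of the kubelet unit inside the node (illustrative only; assumes the profile is still reachable over SSH):

	# expected to report "activating" or "failed" while the kubelet keeps crash-looping
	out/minikube-linux-arm64 ssh -p newest-cni-526531 -- sudo systemctl is-active kubelet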
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-526531
helpers_test.go:244: (dbg) docker inspect newest-cni-526531:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54",
	        "Created": "2025-12-13T16:02:15.548035148Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1542480,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T16:12:14.158493479Z",
	            "FinishedAt": "2025-12-13T16:12:12.79865571Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/hosts",
	        "LogPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54-json.log",
	        "Name": "/newest-cni-526531",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-526531:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-526531",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54",
	                "LowerDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-526531",
	                "Source": "/var/lib/docker/volumes/newest-cni-526531/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-526531",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-526531",
	                "name.minikube.sigs.k8s.io": "newest-cni-526531",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "57c40ce56d621d0f69c7bac6d3cb56a638b53bb82fd302b1930b9f51563e995b",
	            "SandboxKey": "/var/run/docker/netns/57c40ce56d62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34233"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34234"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34237"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34235"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34236"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-526531": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:43:0b:15:7e:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ae0d89b977ec0aa4cc17943d84decbf5f3cf47ff39573e4d4fdb9e9873e2828c",
	                    "EndpointID": "4d19fec2228064ef379084c28bbbd96c0fa36a4142ac70319780a70953fdc4e8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-526531",
	                        "dd2af60ccebf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
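Note: the full docker inspect dump above is kept for completeness; the fields of interest in this post-mortem (container state and the cgroup namespace mode used for the kic node) can be pulled directly with a Go template instead of reading the whole JSON (illustrative only):

	# prints e.g. "running host" for this container
	docker inspect newest-cni-526531 --format '{{.State.Status}} {{.HostConfig.CgroupnsMode}}'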
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-526531 -n newest-cni-526531
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-526531 -n newest-cni-526531: exit status 2 (350.800408ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-526531 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-526531 logs -n 25: (1.541497164s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─
────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─
────────────────────┤
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p disable-driver-mounts-614298                                                                                                                                                                                                                            │ disable-driver-mounts-614298 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 16:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-946932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:00 UTC │
	│ stop    │ -p default-k8s-diff-port-946932 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-946932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ image   │ default-k8s-diff-port-946932 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ pause   │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ unpause │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ start   │ -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-439544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	│ stop    │ -p no-preload-439544 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │ 13 Dec 25 16:04 UTC │
	│ addons  │ enable dashboard -p no-preload-439544 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │ 13 Dec 25 16:04 UTC │
	│ start   │ -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-526531 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:10 UTC │                     │
	│ stop    │ -p newest-cni-526531 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:12 UTC │ 13 Dec 25 16:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-526531 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:12 UTC │ 13 Dec 25 16:12 UTC │
	│ start   │ -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:12 UTC │                     │
	│ image   │ newest-cni-526531 image list --format=json                                                                                                                                                                                                                 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:18 UTC │ 13 Dec 25 16:18 UTC │
	│ pause   │ -p newest-cni-526531 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:18 UTC │ 13 Dec 25 16:18 UTC │
	│ unpause │ -p newest-cni-526531 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:18 UTC │ 13 Dec 25 16:18 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 16:12:13
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 16:12:13.872500 1542350 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:12:13.872721 1542350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:12:13.872749 1542350 out.go:374] Setting ErrFile to fd 2...
	I1213 16:12:13.872769 1542350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:12:13.873083 1542350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:12:13.873513 1542350 out.go:368] Setting JSON to false
	I1213 16:12:13.874453 1542350 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28483,"bootTime":1765613851,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:12:13.874604 1542350 start.go:143] virtualization:  
	I1213 16:12:13.877765 1542350 out.go:179] * [newest-cni-526531] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:12:13.881549 1542350 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:12:13.881619 1542350 notify.go:221] Checking for updates...
	I1213 16:12:13.887324 1542350 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:12:13.890274 1542350 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:12:13.893162 1542350 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:12:13.896033 1542350 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:12:13.898948 1542350 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:12:13.902364 1542350 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:12:13.902980 1542350 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:12:13.935990 1542350 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:12:13.936167 1542350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:12:14.000058 1542350 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:12:13.991072746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:12:14.000167 1542350 docker.go:319] overlay module found
	I1213 16:12:14.005438 1542350 out.go:179] * Using the docker driver based on existing profile
	I1213 16:12:14.008564 1542350 start.go:309] selected driver: docker
	I1213 16:12:14.008597 1542350 start.go:927] validating driver "docker" against &{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString
: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:12:14.008696 1542350 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:12:14.009457 1542350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:12:14.067852 1542350 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:12:14.058134833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:12:14.068237 1542350 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 16:12:14.068271 1542350 cni.go:84] Creating CNI manager for ""
	I1213 16:12:14.068329 1542350 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:12:14.068382 1542350 start.go:353] cluster config:
	{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:12:14.071643 1542350 out.go:179] * Starting "newest-cni-526531" primary control-plane node in "newest-cni-526531" cluster
	I1213 16:12:14.074436 1542350 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:12:14.077449 1542350 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:12:14.080394 1542350 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:12:14.080442 1542350 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 16:12:14.080452 1542350 cache.go:65] Caching tarball of preloaded images
	I1213 16:12:14.080507 1542350 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:12:14.080564 1542350 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 16:12:14.080575 1542350 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 16:12:14.080690 1542350 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:12:14.101187 1542350 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:12:14.101205 1542350 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:12:14.101219 1542350 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:12:14.101249 1542350 start.go:360] acquireMachinesLock for newest-cni-526531: {Name:mk8328ad899404812480e264edd6e13cbbd26230 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:12:14.101300 1542350 start.go:364] duration metric: took 35.502µs to acquireMachinesLock for "newest-cni-526531"
	I1213 16:12:14.101319 1542350 start.go:96] Skipping create...Using existing machine configuration
	I1213 16:12:14.101324 1542350 fix.go:54] fixHost starting: 
	I1213 16:12:14.101579 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:14.120089 1542350 fix.go:112] recreateIfNeeded on newest-cni-526531: state=Stopped err=<nil>
	W1213 16:12:14.120117 1542350 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 16:12:14.123566 1542350 out.go:252] * Restarting existing docker container for "newest-cni-526531" ...
	I1213 16:12:14.123658 1542350 cli_runner.go:164] Run: docker start newest-cni-526531
	I1213 16:12:14.407857 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:14.431483 1542350 kic.go:430] container "newest-cni-526531" state is running.
	I1213 16:12:14.431880 1542350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:12:14.455073 1542350 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:12:14.455509 1542350 machine.go:94] provisionDockerMachine start ...
	I1213 16:12:14.455579 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:14.483076 1542350 main.go:143] libmachine: Using SSH client type: native
	I1213 16:12:14.483636 1542350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I1213 16:12:14.483652 1542350 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:12:14.484350 1542350 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 16:12:17.634930 1542350 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:12:17.634954 1542350 ubuntu.go:182] provisioning hostname "newest-cni-526531"
	I1213 16:12:17.635019 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:17.654681 1542350 main.go:143] libmachine: Using SSH client type: native
	I1213 16:12:17.654996 1542350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I1213 16:12:17.655008 1542350 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-526531 && echo "newest-cni-526531" | sudo tee /etc/hostname
	I1213 16:12:17.812861 1542350 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:12:17.812938 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:17.830348 1542350 main.go:143] libmachine: Using SSH client type: native
	I1213 16:12:17.830658 1542350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I1213 16:12:17.830675 1542350 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-526531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-526531/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-526531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:12:17.987587 1542350 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:12:17.987621 1542350 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:12:17.987641 1542350 ubuntu.go:190] setting up certificates
	I1213 16:12:17.987659 1542350 provision.go:84] configureAuth start
	I1213 16:12:17.987726 1542350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:12:18.011145 1542350 provision.go:143] copyHostCerts
	I1213 16:12:18.011230 1542350 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:12:18.011240 1542350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:12:18.011430 1542350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:12:18.011569 1542350 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:12:18.011584 1542350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:12:18.011623 1542350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:12:18.011690 1542350 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:12:18.011698 1542350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:12:18.011724 1542350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:12:18.011776 1542350 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.newest-cni-526531 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-526531]
	I1213 16:12:18.508738 1542350 provision.go:177] copyRemoteCerts
	I1213 16:12:18.508811 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:12:18.508861 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.526422 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:18.636742 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 16:12:18.655155 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 16:12:18.674107 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:12:18.692128 1542350 provision.go:87] duration metric: took 704.439864ms to configureAuth
	I1213 16:12:18.692158 1542350 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:12:18.692373 1542350 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:12:18.692387 1542350 machine.go:97] duration metric: took 4.236863655s to provisionDockerMachine
	I1213 16:12:18.692395 1542350 start.go:293] postStartSetup for "newest-cni-526531" (driver="docker")
	I1213 16:12:18.692409 1542350 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:12:18.692476 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:12:18.692523 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.710444 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:18.815900 1542350 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:12:18.819552 1542350 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:12:18.819582 1542350 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:12:18.819595 1542350 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:12:18.819651 1542350 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:12:18.819740 1542350 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:12:18.819846 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:12:18.827635 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:12:18.845967 1542350 start.go:296] duration metric: took 153.553828ms for postStartSetup
	I1213 16:12:18.846048 1542350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:12:18.846103 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.863404 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:18.964333 1542350 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:12:18.969276 1542350 fix.go:56] duration metric: took 4.867943668s for fixHost
	I1213 16:12:18.969308 1542350 start.go:83] releasing machines lock for "newest-cni-526531", held for 4.867999692s
	I1213 16:12:18.969378 1542350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:12:18.986065 1542350 ssh_runner.go:195] Run: cat /version.json
	I1213 16:12:18.986168 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.986433 1542350 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:12:18.986485 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:19.008809 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:19.015681 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:19.197190 1542350 ssh_runner.go:195] Run: systemctl --version
	I1213 16:12:19.203734 1542350 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:12:19.208293 1542350 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:12:19.208365 1542350 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:12:19.216699 1542350 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 16:12:19.216724 1542350 start.go:496] detecting cgroup driver to use...
	I1213 16:12:19.216769 1542350 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:12:19.216822 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:12:19.235051 1542350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:12:19.248627 1542350 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:12:19.248695 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:12:19.264536 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:12:19.278273 1542350 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:12:19.415282 1542350 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:12:19.542944 1542350 docker.go:234] disabling docker service ...
	I1213 16:12:19.543049 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:12:19.558893 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:12:19.572698 1542350 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:12:19.700893 1542350 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:12:19.830331 1542350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:12:19.843617 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:12:19.858193 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:12:19.867834 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:12:19.877291 1542350 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:12:19.877362 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:12:19.886078 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:12:19.894812 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:12:19.903917 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:12:19.912720 1542350 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:12:19.921167 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:12:19.930798 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:12:19.940230 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:12:19.950040 1542350 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:12:19.958360 1542350 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:12:19.966286 1542350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:12:20.089676 1542350 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 16:12:20.224467 1542350 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:12:20.224608 1542350 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:12:20.228661 1542350 start.go:564] Will wait 60s for crictl version
	I1213 16:12:20.228772 1542350 ssh_runner.go:195] Run: which crictl
	I1213 16:12:20.232454 1542350 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:12:20.257719 1542350 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 16:12:20.257840 1542350 ssh_runner.go:195] Run: containerd --version
	I1213 16:12:20.279500 1542350 ssh_runner.go:195] Run: containerd --version
	I1213 16:12:20.302783 1542350 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 16:12:20.305579 1542350 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:12:20.322844 1542350 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 16:12:20.326903 1542350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:12:20.339926 1542350 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 16:12:20.342782 1542350 kubeadm.go:884] updating cluster {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:12:20.342928 1542350 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:12:20.343016 1542350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:12:20.367771 1542350 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:12:20.367795 1542350 containerd.go:534] Images already preloaded, skipping extraction
	I1213 16:12:20.367857 1542350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:12:20.393096 1542350 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:12:20.393118 1542350 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:12:20.393126 1542350 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 16:12:20.393232 1542350 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-526531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 16:12:20.393305 1542350 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:12:20.418251 1542350 cni.go:84] Creating CNI manager for ""
	I1213 16:12:20.418277 1542350 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:12:20.418295 1542350 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 16:12:20.418318 1542350 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-526531 NodeName:newest-cni-526531 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:12:20.418435 1542350 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-526531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 16:12:20.418510 1542350 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 16:12:20.426561 1542350 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:12:20.426663 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:12:20.434234 1542350 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 16:12:20.447269 1542350 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 16:12:20.459764 1542350 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 16:12:20.473147 1542350 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:12:20.476975 1542350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:12:20.486881 1542350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:12:20.634044 1542350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:12:20.650082 1542350 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531 for IP: 192.168.76.2
	I1213 16:12:20.650107 1542350 certs.go:195] generating shared ca certs ...
	I1213 16:12:20.650125 1542350 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:20.650260 1542350 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:12:20.650315 1542350 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:12:20.650327 1542350 certs.go:257] generating profile certs ...
	I1213 16:12:20.650431 1542350 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key
	I1213 16:12:20.650494 1542350 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7
	I1213 16:12:20.650541 1542350 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key
	I1213 16:12:20.650652 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:12:20.650691 1542350 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:12:20.650704 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:12:20.650731 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:12:20.650764 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:12:20.650791 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:12:20.650844 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:12:20.651682 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:12:20.679737 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:12:20.697714 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:12:20.716102 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:12:20.734754 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 16:12:20.752380 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 16:12:20.770335 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:12:20.787592 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1213 16:12:20.805866 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:12:20.823616 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:12:20.845606 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:12:20.863659 1542350 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:12:20.877321 1542350 ssh_runner.go:195] Run: openssl version
	I1213 16:12:20.884096 1542350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.891462 1542350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:12:20.900719 1542350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.905878 1542350 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.905990 1542350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.952615 1542350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:12:20.960412 1542350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:20.967994 1542350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:12:20.975909 1542350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:20.979941 1542350 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:20.980042 1542350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:21.021453 1542350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:12:21.029467 1542350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.037114 1542350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:12:21.045054 1542350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.049353 1542350 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.049420 1542350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.090431 1542350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:12:21.097998 1542350 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:12:21.101759 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 16:12:21.142651 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 16:12:21.183449 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 16:12:21.224713 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 16:12:21.267101 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 16:12:21.308542 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 16:12:21.350324 1542350 kubeadm.go:401] StartCluster: {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:12:21.350489 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:12:21.350594 1542350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:12:21.381089 1542350 cri.go:89] found id: ""
	I1213 16:12:21.381225 1542350 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:12:21.391210 1542350 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 16:12:21.391281 1542350 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 16:12:21.391387 1542350 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 16:12:21.399153 1542350 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 16:12:21.399882 1542350 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-526531" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:12:21.400209 1542350 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-526531" cluster setting kubeconfig missing "newest-cni-526531" context setting]
	I1213 16:12:21.400761 1542350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:21.402579 1542350 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 16:12:21.410218 1542350 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 16:12:21.410252 1542350 kubeadm.go:602] duration metric: took 18.943347ms to restartPrimaryControlPlane
	I1213 16:12:21.410262 1542350 kubeadm.go:403] duration metric: took 59.957451ms to StartCluster
	I1213 16:12:21.410276 1542350 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:21.410337 1542350 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:12:21.411206 1542350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:21.411496 1542350 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:12:21.411842 1542350 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 16:12:21.411918 1542350 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-526531"
	I1213 16:12:21.411932 1542350 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-526531"
	I1213 16:12:21.411959 1542350 host.go:66] Checking if "newest-cni-526531" exists ...
	I1213 16:12:21.412409 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.412632 1542350 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:12:21.412699 1542350 addons.go:70] Setting dashboard=true in profile "newest-cni-526531"
	I1213 16:12:21.412715 1542350 addons.go:239] Setting addon dashboard=true in "newest-cni-526531"
	W1213 16:12:21.412722 1542350 addons.go:248] addon dashboard should already be in state true
	I1213 16:12:21.412753 1542350 host.go:66] Checking if "newest-cni-526531" exists ...
	I1213 16:12:21.413150 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.417035 1542350 addons.go:70] Setting default-storageclass=true in profile "newest-cni-526531"
	I1213 16:12:21.417076 1542350 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-526531"
	I1213 16:12:21.417425 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.417785 1542350 out.go:179] * Verifying Kubernetes components...
	I1213 16:12:21.420756 1542350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:12:21.445354 1542350 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 16:12:21.448121 1542350 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:12:21.448150 1542350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 16:12:21.448220 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:21.451677 1542350 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 16:12:21.454559 1542350 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 16:12:21.457364 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 16:12:21.457390 1542350 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 16:12:21.457468 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:21.461079 1542350 addons.go:239] Setting addon default-storageclass=true in "newest-cni-526531"
	I1213 16:12:21.461127 1542350 host.go:66] Checking if "newest-cni-526531" exists ...
	I1213 16:12:21.461533 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.475798 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:21.512911 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:21.534060 1542350 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:21.534082 1542350 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 16:12:21.534143 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:21.567579 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:21.655778 1542350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:12:21.660712 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:12:21.695006 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 16:12:21.695031 1542350 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 16:12:21.711844 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 16:12:21.711868 1542350 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 16:12:21.726264 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 16:12:21.726287 1542350 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 16:12:21.742159 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 16:12:21.742183 1542350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 16:12:21.759213 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 16:12:21.759234 1542350 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 16:12:21.769713 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:21.791192 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 16:12:21.791260 1542350 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 16:12:21.814992 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 16:12:21.815063 1542350 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 16:12:21.830895 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 16:12:21.830972 1542350 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 16:12:21.849742 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:21.849815 1542350 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 16:12:21.864289 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:22.085788 1542350 api_server.go:52] waiting for apiserver process to appear ...
	I1213 16:12:22.085922 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:22.086102 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.086159 1542350 retry.go:31] will retry after 179.056392ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.086246 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.086353 1542350 retry.go:31] will retry after 181.278424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.086609 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.086645 1542350 retry.go:31] will retry after 135.21458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.222538 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:22.266024 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:12:22.268540 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:22.304395 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.304479 1542350 retry.go:31] will retry after 553.734459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.383592 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.383626 1542350 retry.go:31] will retry after 310.627988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.384428 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.384454 1542350 retry.go:31] will retry after 477.647599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.586862 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:22.695343 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:22.754692 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.754771 1542350 retry.go:31] will retry after 349.01084ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.858966 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:22.862536 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:22.953516 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.953561 1542350 retry.go:31] will retry after 343.489775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.953788 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.953849 1542350 retry.go:31] will retry after 703.913124ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.086088 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:23.104680 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:23.181935 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.181974 1542350 retry.go:31] will retry after 792.501261ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.297213 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:23.357629 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.357664 1542350 retry.go:31] will retry after 710.733017ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.586938 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:23.658890 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:23.729079 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.729127 1542350 retry.go:31] will retry after 642.679357ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.975021 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:24.036696 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.036729 1542350 retry.go:31] will retry after 1.762152539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.068939 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:24.086560 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:24.136068 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.136100 1542350 retry.go:31] will retry after 670.883469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.372395 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:24.444952 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.444996 1542350 retry.go:31] will retry after 1.594344916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.586388 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:24.807252 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:24.873210 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.873241 1542350 retry.go:31] will retry after 1.504699438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:25.086635 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:25.586697 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:25.799081 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:25.864095 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:25.864173 1542350 retry.go:31] will retry after 2.833515163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.040555 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:26.086244 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:26.134589 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.134626 1542350 retry.go:31] will retry after 2.268954348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.378204 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:26.437143 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.437179 1542350 retry.go:31] will retry after 2.009206759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.586404 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:27.086045 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:27.586074 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:28.086070 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:28.404537 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:28.446967 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:28.469203 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.469234 1542350 retry.go:31] will retry after 1.799417627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:28.516574 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.516611 1542350 retry.go:31] will retry after 2.723803306s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.586847 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:28.698086 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:28.762693 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.762729 1542350 retry.go:31] will retry after 1.577559772s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:29.086307 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:29.586102 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:30.086078 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:30.269847 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:30.336710 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.336749 1542350 retry.go:31] will retry after 2.535864228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.341075 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:30.419871 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.419902 1542350 retry.go:31] will retry after 2.188608586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.586056 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:31.086792 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:31.241343 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:31.303140 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:31.303175 1542350 retry.go:31] will retry after 4.008884548s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:31.586821 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:32.086175 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:32.587018 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:32.608868 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:32.689818 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:32.689856 1542350 retry.go:31] will retry after 5.074576061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:32.873213 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:32.940949 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:32.940984 1542350 retry.go:31] will retry after 7.456449925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:33.086429 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:33.586022 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:34.086094 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:34.585998 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:35.086896 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:35.312254 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:35.377660 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:35.377698 1542350 retry.go:31] will retry after 9.192453055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:35.587034 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:36.086843 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:36.586051 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:37.086838 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:37.586771 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:37.765048 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:37.824278 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:37.824312 1542350 retry.go:31] will retry after 11.772995815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:38.086864 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:38.586073 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:39.086969 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:39.586055 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:40.086122 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:40.398539 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:40.468470 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:40.468513 1542350 retry.go:31] will retry after 13.248485713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:40.586656 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:41.086065 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:41.586366 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:42.086189 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:42.586086 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:43.086089 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:43.586027 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:44.086574 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:44.570741 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:44.586247 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:44.654442 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:44.654477 1542350 retry.go:31] will retry after 14.969470504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:45.086353 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:45.586835 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:46.086082 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:46.586716 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:47.086075 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:47.586621 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:48.086124 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:48.586928 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:49.087028 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:49.586115 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:49.597980 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:49.660643 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:49.660672 1542350 retry.go:31] will retry after 11.077380605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:50.086194 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:50.586148 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:51.086673 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:51.586443 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:52.086098 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:52.586095 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:53.086117 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:53.586714 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:53.717290 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:53.777883 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:53.777918 1542350 retry.go:31] will retry after 17.242726639s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:54.086154 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:54.586837 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:55.086738 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:55.586843 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:56.086112 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:56.586065 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:57.087033 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:57.587026 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:58.086821 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:58.586066 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:59.086344 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:59.586987 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:59.624396 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:59.692077 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:59.692113 1542350 retry.go:31] will retry after 25.118824905s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:00.086703 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:00.586076 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:00.738326 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:13:00.797829 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:00.797860 1542350 retry.go:31] will retry after 28.273971977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:01.086109 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:01.586093 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:02.086800 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:02.586059 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:03.086118 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:03.586099 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:04.086574 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:04.586119 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:05.087001 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:05.586735 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:06.087021 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:06.586098 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:07.086059 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:07.586074 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:08.086071 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:08.586627 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:09.086132 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:09.586339 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:10.086956 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:10.586102 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:11.020938 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:13:11.086782 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:13:11.098002 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:11.098037 1542350 retry.go:31] will retry after 28.022573365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:11.586801 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:12.086121 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:12.586779 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:13.086780 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:13.586110 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:14.086075 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:14.586725 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:15.086688 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:15.587040 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:16.086588 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:16.586972 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:17.086881 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:17.586014 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:18.086609 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:18.586065 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:19.086985 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:19.586109 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:20.086095 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:20.586709 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:21.086130 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:21.586680 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:21.586792 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:21.614864 1542350 cri.go:89] found id: ""
	I1213 16:13:21.614885 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.614894 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:21.614901 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:21.614963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:21.646495 1542350 cri.go:89] found id: ""
	I1213 16:13:21.646517 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.646525 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:21.646532 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:21.646592 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:21.676251 1542350 cri.go:89] found id: ""
	I1213 16:13:21.676274 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.676283 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:21.676289 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:21.676358 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:21.706048 1542350 cri.go:89] found id: ""
	I1213 16:13:21.706075 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.706084 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:21.706093 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:21.706167 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:21.733595 1542350 cri.go:89] found id: ""
	I1213 16:13:21.733620 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.733628 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:21.733634 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:21.733694 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:21.758418 1542350 cri.go:89] found id: ""
	I1213 16:13:21.758444 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.758453 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:21.758459 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:21.758520 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:21.782936 1542350 cri.go:89] found id: ""
	I1213 16:13:21.782962 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.782970 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:21.782976 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:21.783038 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:21.807262 1542350 cri.go:89] found id: ""
	I1213 16:13:21.807289 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.807298 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:21.807327 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:21.807340 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:21.862632 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:21.862670 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:21.879878 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:21.879905 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:21.954675 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:21.946473    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.946958    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.948653    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.949095    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.950567    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:21.946473    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.946958    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.948653    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.949095    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.950567    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:21.954699 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:21.954712 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:21.980443 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:21.980489 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:24.514188 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:24.524708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:24.524788 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:24.549819 1542350 cri.go:89] found id: ""
	I1213 16:13:24.549840 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.549848 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:24.549866 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:24.549925 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:24.574754 1542350 cri.go:89] found id: ""
	I1213 16:13:24.574781 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.574790 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:24.574795 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:24.574857 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:24.606443 1542350 cri.go:89] found id: ""
	I1213 16:13:24.606465 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.606474 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:24.606481 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:24.606542 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:24.638639 1542350 cri.go:89] found id: ""
	I1213 16:13:24.638660 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.638668 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:24.638674 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:24.638733 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:24.671023 1542350 cri.go:89] found id: ""
	I1213 16:13:24.671046 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.671055 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:24.671063 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:24.671137 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:24.697378 1542350 cri.go:89] found id: ""
	I1213 16:13:24.697405 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.697414 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:24.697420 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:24.697497 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:24.722594 1542350 cri.go:89] found id: ""
	I1213 16:13:24.722621 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.722631 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:24.722637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:24.722728 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:24.746821 1542350 cri.go:89] found id: ""
	I1213 16:13:24.746850 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.746860 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:24.746878 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:24.746891 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:24.763249 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:24.763286 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 16:13:24.811678 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:13:24.851435 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:24.840548    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.841317    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843094    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843728    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.845484    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:24.840548    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.841317    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843094    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843728    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.845484    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:24.851500 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:24.851539 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1213 16:13:24.879668 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:24.879746 1542350 retry.go:31] will retry after 33.423455906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:24.890839 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:24.890870 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:24.920848 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:24.920877 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:27.476632 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:27.488585 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:27.488659 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:27.518011 1542350 cri.go:89] found id: ""
	I1213 16:13:27.518034 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.518042 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:27.518049 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:27.518110 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:27.543732 1542350 cri.go:89] found id: ""
	I1213 16:13:27.543759 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.543771 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:27.543777 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:27.543862 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:27.568999 1542350 cri.go:89] found id: ""
	I1213 16:13:27.569025 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.569033 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:27.569039 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:27.569097 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:27.607884 1542350 cri.go:89] found id: ""
	I1213 16:13:27.607913 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.607921 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:27.607928 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:27.607987 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:27.644349 1542350 cri.go:89] found id: ""
	I1213 16:13:27.644376 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.644384 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:27.644390 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:27.644461 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:27.676832 1542350 cri.go:89] found id: ""
	I1213 16:13:27.676860 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.676870 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:27.676875 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:27.676934 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:27.702113 1542350 cri.go:89] found id: ""
	I1213 16:13:27.702142 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.702151 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:27.702157 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:27.702219 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:27.727737 1542350 cri.go:89] found id: ""
	I1213 16:13:27.727763 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.727772 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:27.727782 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:27.727795 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:27.782283 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:27.782317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:27.800167 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:27.800195 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:27.871267 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:27.862487    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.863167    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.864974    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.865561    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.867199    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:27.862487    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.863167    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.864974    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.865561    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.867199    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:27.871378 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:27.871398 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:27.896932 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:27.896972 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:29.072145 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:13:29.152200 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:29.152237 1542350 retry.go:31] will retry after 45.772066333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:30.424283 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:30.435064 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:30.435141 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:30.458954 1542350 cri.go:89] found id: ""
	I1213 16:13:30.458977 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.458985 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:30.458991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:30.459050 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:30.482988 1542350 cri.go:89] found id: ""
	I1213 16:13:30.483016 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.483025 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:30.483031 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:30.483089 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:30.508669 1542350 cri.go:89] found id: ""
	I1213 16:13:30.508695 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.508704 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:30.508710 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:30.508797 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:30.532450 1542350 cri.go:89] found id: ""
	I1213 16:13:30.532543 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.532561 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:30.532569 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:30.532643 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:30.561998 1542350 cri.go:89] found id: ""
	I1213 16:13:30.562026 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.562035 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:30.562041 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:30.562132 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:30.600654 1542350 cri.go:89] found id: ""
	I1213 16:13:30.600688 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.600703 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:30.600711 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:30.600824 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:30.628653 1542350 cri.go:89] found id: ""
	I1213 16:13:30.628724 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.628758 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:30.628798 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:30.628886 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:30.659930 1542350 cri.go:89] found id: ""
	I1213 16:13:30.660009 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.660032 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:30.660049 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:30.660076 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:30.717289 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:30.717327 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:30.733637 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:30.733668 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:30.804923 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:30.797049    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.797717    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799180    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799496    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.800891    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:30.797049    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.797717    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799180    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799496    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.800891    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:30.804949 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:30.804966 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:30.830439 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:30.830482 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:33.359431 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:33.370707 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:33.370778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:33.404091 1542350 cri.go:89] found id: ""
	I1213 16:13:33.404114 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.404135 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:33.404141 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:33.404200 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:33.432896 1542350 cri.go:89] found id: ""
	I1213 16:13:33.432922 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.432931 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:33.432937 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:33.433006 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:33.457244 1542350 cri.go:89] found id: ""
	I1213 16:13:33.457271 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.457280 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:33.457285 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:33.457343 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:33.482368 1542350 cri.go:89] found id: ""
	I1213 16:13:33.482389 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.482397 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:33.482403 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:33.482463 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:33.506253 1542350 cri.go:89] found id: ""
	I1213 16:13:33.506276 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.506284 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:33.506290 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:33.506350 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:33.532337 1542350 cri.go:89] found id: ""
	I1213 16:13:33.532362 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.532371 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:33.532377 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:33.532435 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:33.557859 1542350 cri.go:89] found id: ""
	I1213 16:13:33.557887 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.557896 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:33.557902 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:33.557961 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:33.585180 1542350 cri.go:89] found id: ""
	I1213 16:13:33.585208 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.585216 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:33.585226 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:33.585249 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:33.626301 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:33.626332 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:33.693048 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:33.693086 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:33.709482 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:33.709550 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:33.779437 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:33.771529    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.771977    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773496    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773820    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.775408    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:33.771529    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.771977    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773496    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773820    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.775408    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:33.779461 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:33.779476 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:36.314080 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:36.324714 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:36.324793 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:36.352949 1542350 cri.go:89] found id: ""
	I1213 16:13:36.353025 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.353048 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:36.353066 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:36.353159 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:36.384496 1542350 cri.go:89] found id: ""
	I1213 16:13:36.384563 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.384586 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:36.384603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:36.384690 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:36.418779 1542350 cri.go:89] found id: ""
	I1213 16:13:36.418842 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.418866 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:36.418884 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:36.418968 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:36.448378 1542350 cri.go:89] found id: ""
	I1213 16:13:36.448420 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.448429 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:36.448445 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:36.448524 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:36.473284 1542350 cri.go:89] found id: ""
	I1213 16:13:36.473361 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.473376 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:36.473383 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:36.473454 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:36.500619 1542350 cri.go:89] found id: ""
	I1213 16:13:36.500642 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.500651 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:36.500663 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:36.500724 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:36.529444 1542350 cri.go:89] found id: ""
	I1213 16:13:36.529517 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.529532 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:36.529539 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:36.529609 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:36.553861 1542350 cri.go:89] found id: ""
	I1213 16:13:36.553886 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.553894 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:36.553904 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:36.553915 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:36.610671 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:36.610704 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:36.628462 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:36.628544 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:36.705883 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:36.697914    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.698707    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700211    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700636    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.702077    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:36.697914    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.698707    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700211    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700636    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.702077    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:36.705906 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:36.705918 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:36.730607 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:36.730646 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:39.121733 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:13:39.184741 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:39.184777 1542350 retry.go:31] will retry after 19.299456104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:39.259892 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:39.271332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:39.271403 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:39.300612 1542350 cri.go:89] found id: ""
	I1213 16:13:39.300637 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.300646 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:39.300652 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:39.300712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:39.324641 1542350 cri.go:89] found id: ""
	I1213 16:13:39.324666 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.324675 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:39.324680 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:39.324739 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:39.356074 1542350 cri.go:89] found id: ""
	I1213 16:13:39.356099 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.356108 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:39.356114 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:39.356178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:39.383742 1542350 cri.go:89] found id: ""
	I1213 16:13:39.383766 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.383775 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:39.383781 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:39.383846 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:39.411271 1542350 cri.go:89] found id: ""
	I1213 16:13:39.411297 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.411305 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:39.411334 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:39.411395 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:39.437295 1542350 cri.go:89] found id: ""
	I1213 16:13:39.437321 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.437329 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:39.437336 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:39.437419 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:39.462328 1542350 cri.go:89] found id: ""
	I1213 16:13:39.462352 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.462361 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:39.462368 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:39.462445 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:39.486926 1542350 cri.go:89] found id: ""
	I1213 16:13:39.486951 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.486961 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:39.486970 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:39.486986 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:39.545864 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:39.545902 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:39.561750 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:39.561780 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:39.648853 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:39.635798    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.636607    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.637647    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.638390    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.642907    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:39.635798    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.636607    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.637647    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.638390    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.642907    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:39.648878 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:39.648893 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:39.674238 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:39.674280 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:42.203005 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:42.217190 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:42.217290 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:42.248179 1542350 cri.go:89] found id: ""
	I1213 16:13:42.248214 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.248224 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:42.248231 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:42.248315 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:42.281373 1542350 cri.go:89] found id: ""
	I1213 16:13:42.281400 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.281409 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:42.281416 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:42.281481 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:42.313298 1542350 cri.go:89] found id: ""
	I1213 16:13:42.313327 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.313343 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:42.313351 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:42.313419 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:42.347164 1542350 cri.go:89] found id: ""
	I1213 16:13:42.347256 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.347274 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:42.347282 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:42.347421 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:42.377063 1542350 cri.go:89] found id: ""
	I1213 16:13:42.377097 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.377105 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:42.377112 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:42.377195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:42.404395 1542350 cri.go:89] found id: ""
	I1213 16:13:42.404430 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.404439 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:42.404446 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:42.404522 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:42.429038 1542350 cri.go:89] found id: ""
	I1213 16:13:42.429112 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.429128 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:42.429135 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:42.429202 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:42.453891 1542350 cri.go:89] found id: ""
	I1213 16:13:42.453935 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.453944 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:42.453954 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:42.453970 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:42.509865 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:42.509901 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:42.525994 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:42.526022 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:42.601177 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:42.590564    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.592469    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.593053    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.594708    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.595182    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:42.590564    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.592469    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.593053    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.594708    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.595182    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:42.601257 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:42.601292 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:42.630417 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:42.630495 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:45.167780 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:45.186685 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:45.186786 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:45.266905 1542350 cri.go:89] found id: ""
	I1213 16:13:45.266931 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.266941 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:45.266948 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:45.267020 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:45.302244 1542350 cri.go:89] found id: ""
	I1213 16:13:45.302273 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.302283 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:45.302289 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:45.302368 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:45.330669 1542350 cri.go:89] found id: ""
	I1213 16:13:45.330697 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.330707 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:45.330713 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:45.330777 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:45.368642 1542350 cri.go:89] found id: ""
	I1213 16:13:45.368677 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.368685 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:45.368692 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:45.368753 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:45.407608 1542350 cri.go:89] found id: ""
	I1213 16:13:45.407631 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.407639 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:45.407645 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:45.407706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:45.438077 1542350 cri.go:89] found id: ""
	I1213 16:13:45.438104 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.438112 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:45.438119 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:45.438178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:45.467617 1542350 cri.go:89] found id: ""
	I1213 16:13:45.467645 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.467654 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:45.467660 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:45.467725 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:45.496715 1542350 cri.go:89] found id: ""
	I1213 16:13:45.496741 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.496750 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:45.496760 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:45.496771 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:45.522438 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:45.522475 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:45.554662 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:45.554691 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:45.614193 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:45.614275 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:45.631794 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:45.631875 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:45.701179 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:45.692225    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.692953    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.694656    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.695386    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.697149    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:45.692225    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.692953    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.694656    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.695386    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.697149    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:48.201848 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:48.212860 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:48.212934 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:48.241802 1542350 cri.go:89] found id: ""
	I1213 16:13:48.241830 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.241838 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:48.241845 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:48.241908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:48.270100 1542350 cri.go:89] found id: ""
	I1213 16:13:48.270128 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.270137 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:48.270143 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:48.270207 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:48.295048 1542350 cri.go:89] found id: ""
	I1213 16:13:48.295073 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.295081 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:48.295087 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:48.295150 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:48.320949 1542350 cri.go:89] found id: ""
	I1213 16:13:48.320974 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.320983 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:48.320989 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:48.321048 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:48.357548 1542350 cri.go:89] found id: ""
	I1213 16:13:48.357572 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.357580 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:48.357586 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:48.357646 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:48.395642 1542350 cri.go:89] found id: ""
	I1213 16:13:48.395676 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.395685 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:48.395692 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:48.395761 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:48.426584 1542350 cri.go:89] found id: ""
	I1213 16:13:48.426611 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.426620 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:48.426626 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:48.426687 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:48.451854 1542350 cri.go:89] found id: ""
	I1213 16:13:48.451890 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.451899 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:48.451923 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:48.451938 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:48.508044 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:48.508086 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:48.523941 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:48.523971 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:48.594870 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:48.579201    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.579927    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.583793    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.584114    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.585609    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:48.579201    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.579927    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.583793    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.584114    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.585609    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:48.594893 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:48.594906 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:48.621999 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:48.622078 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:51.156024 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:51.167178 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:51.167252 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:51.198661 1542350 cri.go:89] found id: ""
	I1213 16:13:51.198684 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.198692 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:51.198699 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:51.198757 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:51.224046 1542350 cri.go:89] found id: ""
	I1213 16:13:51.224069 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.224077 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:51.224083 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:51.224149 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:51.253035 1542350 cri.go:89] found id: ""
	I1213 16:13:51.253062 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.253070 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:51.253076 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:51.253164 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:51.278917 1542350 cri.go:89] found id: ""
	I1213 16:13:51.278943 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.278952 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:51.278958 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:51.279016 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:51.305382 1542350 cri.go:89] found id: ""
	I1213 16:13:51.305405 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.305413 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:51.305419 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:51.305480 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:51.329703 1542350 cri.go:89] found id: ""
	I1213 16:13:51.329726 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.329735 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:51.329741 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:51.329800 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:51.359740 1542350 cri.go:89] found id: ""
	I1213 16:13:51.359762 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.359770 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:51.359776 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:51.359840 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:51.386446 1542350 cri.go:89] found id: ""
	I1213 16:13:51.386522 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.386544 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:51.386566 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:51.386589 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:51.412669 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:51.412707 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:51.453745 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:51.453775 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:51.511660 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:51.511698 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:51.527994 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:51.528025 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:51.595021 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:51.583262    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.584038    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.585894    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.586556    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.588263    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:51.583262    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.584038    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.585894    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.586556    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.588263    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:54.096158 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:54.107425 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:54.107512 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:54.138865 1542350 cri.go:89] found id: ""
	I1213 16:13:54.138891 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.138899 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:54.138905 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:54.138966 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:54.164096 1542350 cri.go:89] found id: ""
	I1213 16:13:54.164121 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.164130 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:54.164135 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:54.164195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:54.193309 1542350 cri.go:89] found id: ""
	I1213 16:13:54.193335 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.193345 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:54.193352 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:54.193416 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:54.219468 1542350 cri.go:89] found id: ""
	I1213 16:13:54.219490 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.219499 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:54.219520 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:54.219589 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:54.244935 1542350 cri.go:89] found id: ""
	I1213 16:13:54.244962 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.244971 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:54.244977 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:54.245038 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:54.274445 1542350 cri.go:89] found id: ""
	I1213 16:13:54.274472 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.274481 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:54.274488 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:54.274554 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:54.304121 1542350 cri.go:89] found id: ""
	I1213 16:13:54.304146 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.304154 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:54.304160 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:54.304217 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:54.329301 1542350 cri.go:89] found id: ""
	I1213 16:13:54.329327 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.329335 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:54.329350 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:54.329362 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:54.357962 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:54.358003 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:54.393726 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:54.393753 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:54.454879 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:54.454917 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:54.471046 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:54.471122 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:54.539675 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:54.530749    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.531242    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.532726    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.533202    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.535706    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:54.530749    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.531242    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.532726    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.533202    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.535706    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
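	[editor's note] The block above, and each near-identical repetition that follows, is minikube's log-gathering loop: it probes for every control-plane container with crictl, finds none, then collects kubelet, dmesg, containerd, and container-status output before retrying a few seconds later. A minimal sketch for reproducing the same checks by hand, assuming shell access to the node (for example via minikube ssh); the commands are the same ones shown in the log:

	    # Is any kube-apiserver container present (running or exited)?
	    sudo crictl ps -a --quiet --name=kube-apiserver

	    # Why isn't the kubelet starting the static control-plane pods?
	    sudo journalctl -u kubelet -n 400

	    # The same describe-nodes call the harness makes; it fails while :8443 is down
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig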
	I1213 16:13:57.040543 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:57.051825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:57.051902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:57.080948 1542350 cri.go:89] found id: ""
	I1213 16:13:57.080975 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.080984 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:57.080990 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:57.081060 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:57.106564 1542350 cri.go:89] found id: ""
	I1213 16:13:57.106592 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.106602 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:57.106609 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:57.106674 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:57.132305 1542350 cri.go:89] found id: ""
	I1213 16:13:57.132332 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.132341 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:57.132347 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:57.132415 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:57.161893 1542350 cri.go:89] found id: ""
	I1213 16:13:57.161919 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.161928 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:57.161934 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:57.161996 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:57.187018 1542350 cri.go:89] found id: ""
	I1213 16:13:57.187042 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.187051 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:57.187057 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:57.187118 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:57.213450 1542350 cri.go:89] found id: ""
	I1213 16:13:57.213477 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.213486 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:57.213493 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:57.213598 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:57.239773 1542350 cri.go:89] found id: ""
	I1213 16:13:57.239799 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.239808 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:57.239814 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:57.239875 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:57.268874 1542350 cri.go:89] found id: ""
	I1213 16:13:57.268901 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.268910 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:57.268920 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:57.268932 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:57.325438 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:57.325478 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:57.345255 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:57.345288 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:57.419796 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:57.411216    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.412003    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.413617    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.414124    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.415814    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:57.411216    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.412003    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.413617    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.414124    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.415814    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:57.419818 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:57.419830 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:57.445711 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:57.445753 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:58.303454 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:13:58.370450 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:13:58.370563 1542350 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 16:13:58.485061 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:13:58.547882 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:13:58.547990 1542350 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
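	[editor's note] The storageclass and dashboard addon applies fail for the same underlying reason as the describe-nodes calls: kubectl cannot download the OpenAPI schema because nothing is listening on localhost:8443, which matches the repeated No container was found matching "kube-apiserver" results above. The --validate=false hint in the error message only skips schema validation; the apply would still fail while the apiserver is unreachable. A quick way to confirm the missing listener from inside the node (a sketch; assumes ss and curl are available in the node image):

	    # Nothing should be bound to the apiserver port while the failure persists
	    sudo ss -tlnp | grep 8443 || echo "no listener on 8443"

	    # Direct probe of the apiserver health endpoint
	    curl -ksS https://localhost:8443/healthz || true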
	I1213 16:13:59.973778 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:59.984749 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:59.984822 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:00.047691 1542350 cri.go:89] found id: ""
	I1213 16:14:00.047719 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.047729 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:00.047735 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:00.047812 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:00.172004 1542350 cri.go:89] found id: ""
	I1213 16:14:00.172032 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.172042 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:00.172048 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:00.172124 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:00.225264 1542350 cri.go:89] found id: ""
	I1213 16:14:00.225417 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.225430 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:00.225441 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:00.225515 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:00.291798 1542350 cri.go:89] found id: ""
	I1213 16:14:00.291826 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.291837 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:00.291843 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:00.291915 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:00.322720 1542350 cri.go:89] found id: ""
	I1213 16:14:00.322775 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.322785 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:00.322802 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:00.322965 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:00.382229 1542350 cri.go:89] found id: ""
	I1213 16:14:00.382259 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.382268 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:00.382276 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:00.382353 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:00.428076 1542350 cri.go:89] found id: ""
	I1213 16:14:00.428104 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.428114 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:00.428122 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:00.428188 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:00.456283 1542350 cri.go:89] found id: ""
	I1213 16:14:00.456313 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.456322 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:00.456334 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:00.456347 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:00.487074 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:00.487103 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:00.543060 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:00.543096 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:00.559570 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:00.559599 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:00.643362 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:00.632447    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.633406    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637299    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637614    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.639111    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:00.632447    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.633406    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637299    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637614    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.639111    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:00.643385 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:00.643398 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:03.169712 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:03.180422 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:03.180498 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:03.204986 1542350 cri.go:89] found id: ""
	I1213 16:14:03.205052 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.205078 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:03.205091 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:03.205167 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:03.229548 1542350 cri.go:89] found id: ""
	I1213 16:14:03.229624 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.229648 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:03.229667 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:03.229759 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:03.255379 1542350 cri.go:89] found id: ""
	I1213 16:14:03.255401 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.255410 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:03.255415 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:03.255474 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:03.281492 1542350 cri.go:89] found id: ""
	I1213 16:14:03.281516 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.281526 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:03.281532 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:03.281594 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:03.309687 1542350 cri.go:89] found id: ""
	I1213 16:14:03.309709 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.309717 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:03.309723 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:03.309781 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:03.342064 1542350 cri.go:89] found id: ""
	I1213 16:14:03.342088 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.342097 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:03.342104 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:03.342166 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:03.374355 1542350 cri.go:89] found id: ""
	I1213 16:14:03.374427 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.374449 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:03.374468 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:03.374551 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:03.402300 1542350 cri.go:89] found id: ""
	I1213 16:14:03.402373 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.402397 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:03.402419 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:03.402454 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:03.419291 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:03.419341 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:03.488415 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:03.479265    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.480042    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.481961    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.482530    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.483778    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:03.479265    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.480042    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.481961    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.482530    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.483778    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:03.488438 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:03.488450 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:03.513548 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:03.513583 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:03.541410 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:03.541438 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:06.098537 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:06.109444 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:06.109517 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:06.135738 1542350 cri.go:89] found id: ""
	I1213 16:14:06.135763 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.135772 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:06.135778 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:06.135838 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:06.164881 1542350 cri.go:89] found id: ""
	I1213 16:14:06.164907 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.164915 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:06.164921 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:06.165006 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:06.190132 1542350 cri.go:89] found id: ""
	I1213 16:14:06.190157 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.190166 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:06.190172 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:06.190237 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:06.214554 1542350 cri.go:89] found id: ""
	I1213 16:14:06.214588 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.214603 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:06.214610 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:06.214678 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:06.239546 1542350 cri.go:89] found id: ""
	I1213 16:14:06.239573 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.239582 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:06.239588 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:06.239675 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:06.265195 1542350 cri.go:89] found id: ""
	I1213 16:14:06.265223 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.265231 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:06.265237 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:06.265308 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:06.289926 1542350 cri.go:89] found id: ""
	I1213 16:14:06.289960 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.289969 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:06.289991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:06.290071 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:06.314603 1542350 cri.go:89] found id: ""
	I1213 16:14:06.314629 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.314637 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:06.314647 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:06.314683 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:06.371177 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:06.371258 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:06.393856 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:06.393930 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:06.459001 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:06.450168    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.450823    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.452646    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.453140    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.454779    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:06.450168    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.450823    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.452646    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.453140    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.454779    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:06.459025 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:06.459038 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:06.484151 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:06.484188 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:09.017168 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:09.028196 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:09.028273 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:09.056958 1542350 cri.go:89] found id: ""
	I1213 16:14:09.056983 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.056991 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:09.056997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:09.057056 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:09.081528 1542350 cri.go:89] found id: ""
	I1213 16:14:09.081554 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.081562 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:09.081568 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:09.081625 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:09.106979 1542350 cri.go:89] found id: ""
	I1213 16:14:09.107006 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.107015 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:09.107022 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:09.107082 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:09.131992 1542350 cri.go:89] found id: ""
	I1213 16:14:09.132014 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.132022 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:09.132031 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:09.132090 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:09.159379 1542350 cri.go:89] found id: ""
	I1213 16:14:09.159403 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.159411 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:09.159417 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:09.159475 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:09.188125 1542350 cri.go:89] found id: ""
	I1213 16:14:09.188148 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.188157 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:09.188163 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:09.188223 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:09.213724 1542350 cri.go:89] found id: ""
	I1213 16:14:09.213746 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.213755 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:09.213762 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:09.213820 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:09.239228 1542350 cri.go:89] found id: ""
	I1213 16:14:09.239250 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.239258 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:09.239269 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:09.239280 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:09.264873 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:09.264908 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:09.297705 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:09.297733 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:09.356080 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:09.356130 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:09.376099 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:09.376130 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:09.447156 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:09.438767    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.439139    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.440687    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.441254    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.442932    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:09.438767    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.439139    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.440687    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.441254    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.442932    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:11.948214 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:11.961565 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:11.961686 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:11.989927 1542350 cri.go:89] found id: ""
	I1213 16:14:11.989978 1542350 logs.go:282] 0 containers: []
	W1213 16:14:11.989988 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:11.989997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:11.990074 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:12.015827 1542350 cri.go:89] found id: ""
	I1213 16:14:12.015853 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.015863 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:12.015869 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:12.015931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:12.043024 1542350 cri.go:89] found id: ""
	I1213 16:14:12.043052 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.043061 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:12.043067 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:12.043129 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:12.068348 1542350 cri.go:89] found id: ""
	I1213 16:14:12.068376 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.068385 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:12.068390 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:12.068450 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:12.097740 1542350 cri.go:89] found id: ""
	I1213 16:14:12.097774 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.097783 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:12.097790 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:12.097858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:12.121723 1542350 cri.go:89] found id: ""
	I1213 16:14:12.121755 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.121764 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:12.121770 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:12.121842 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:12.150786 1542350 cri.go:89] found id: ""
	I1213 16:14:12.150813 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.150821 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:12.150827 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:12.150892 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:12.175342 1542350 cri.go:89] found id: ""
	I1213 16:14:12.175367 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.175376 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:12.175386 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:12.175404 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:12.231019 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:12.231066 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:12.247225 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:12.247257 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:12.311535 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:12.303778    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.304209    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.305700    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.306034    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.307505    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:12.303778    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.304209    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.305700    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.306034    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.307505    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:12.311562 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:12.311575 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:12.336385 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:12.336419 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:14.871456 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:14.883637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:14.883706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:14.912506 1542350 cri.go:89] found id: ""
	I1213 16:14:14.912530 1542350 logs.go:282] 0 containers: []
	W1213 16:14:14.912539 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:14.912545 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:14.912612 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:14.924965 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:14:14.948875 1542350 cri.go:89] found id: ""
	I1213 16:14:14.948908 1542350 logs.go:282] 0 containers: []
	W1213 16:14:14.948917 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:14.948923 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:14.948983 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	W1213 16:14:15.004427 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:14:15.004545 1542350 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 16:14:15.004879 1542350 cri.go:89] found id: ""
	I1213 16:14:15.004917 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.005050 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:15.005059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:15.005129 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:15.016719 1542350 out.go:179] * Enabled addons: 
	I1213 16:14:15.019727 1542350 addons.go:530] duration metric: took 1m53.607875831s for enable addons: enabled=[]
	I1213 16:14:15.061323 1542350 cri.go:89] found id: ""
	I1213 16:14:15.061351 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.061359 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:15.061366 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:15.061431 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:15.089262 1542350 cri.go:89] found id: ""
	I1213 16:14:15.089290 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.089310 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:15.089351 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:15.089416 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:15.114964 1542350 cri.go:89] found id: ""
	I1213 16:14:15.114992 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.115001 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:15.115010 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:15.115087 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:15.150205 1542350 cri.go:89] found id: ""
	I1213 16:14:15.150228 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.150237 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:15.150243 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:15.150305 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:15.179096 1542350 cri.go:89] found id: ""
	I1213 16:14:15.179124 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.179159 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:15.179170 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:15.179186 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:15.240671 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:15.240716 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:15.257989 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:15.258020 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:15.327105 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:15.316562    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.317280    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321065    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321732    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.323049    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:15.316562    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.317280    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321065    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321732    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.323049    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:15.327125 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:15.327139 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:15.356556 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:15.356601 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:17.895435 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:17.906103 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:17.906178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:17.934229 1542350 cri.go:89] found id: ""
	I1213 16:14:17.934255 1542350 logs.go:282] 0 containers: []
	W1213 16:14:17.934263 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:17.934270 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:17.934329 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:17.960923 1542350 cri.go:89] found id: ""
	I1213 16:14:17.960947 1542350 logs.go:282] 0 containers: []
	W1213 16:14:17.960955 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:17.960980 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:17.961039 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:17.986062 1542350 cri.go:89] found id: ""
	I1213 16:14:17.986096 1542350 logs.go:282] 0 containers: []
	W1213 16:14:17.986105 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:17.986111 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:17.986180 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:18.019636 1542350 cri.go:89] found id: ""
	I1213 16:14:18.019718 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.019741 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:18.019761 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:18.019858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:18.046719 1542350 cri.go:89] found id: ""
	I1213 16:14:18.046787 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.046810 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:18.046829 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:18.046924 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:18.073562 1542350 cri.go:89] found id: ""
	I1213 16:14:18.073641 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.073665 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:18.073685 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:18.073763 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:18.100968 1542350 cri.go:89] found id: ""
	I1213 16:14:18.101005 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.101014 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:18.101021 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:18.101086 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:18.127366 1542350 cri.go:89] found id: ""
	I1213 16:14:18.127391 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.127401 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:18.127410 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:18.127422 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:18.160263 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:18.160289 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:18.217033 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:18.217066 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:18.234115 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:18.234146 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:18.301091 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:18.292902    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.293484    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295231    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295655    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.297297    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:18.292902    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.293484    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295231    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295655    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.297297    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:18.301112 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:18.301126 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:20.828738 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:20.843249 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:20.843356 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:20.878301 1542350 cri.go:89] found id: ""
	I1213 16:14:20.878326 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.878335 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:20.878341 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:20.878400 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:20.911841 1542350 cri.go:89] found id: ""
	I1213 16:14:20.911863 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.911872 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:20.911877 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:20.911937 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:20.938802 1542350 cri.go:89] found id: ""
	I1213 16:14:20.938825 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.938833 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:20.938839 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:20.938895 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:20.963358 1542350 cri.go:89] found id: ""
	I1213 16:14:20.963382 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.963395 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:20.963402 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:20.963462 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:20.988428 1542350 cri.go:89] found id: ""
	I1213 16:14:20.988500 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.988516 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:20.988523 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:20.988586 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:21.015053 1542350 cri.go:89] found id: ""
	I1213 16:14:21.015088 1542350 logs.go:282] 0 containers: []
	W1213 16:14:21.015097 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:21.015104 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:21.015168 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:21.041720 1542350 cri.go:89] found id: ""
	I1213 16:14:21.041747 1542350 logs.go:282] 0 containers: []
	W1213 16:14:21.041761 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:21.041767 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:21.041844 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:21.066333 1542350 cri.go:89] found id: ""
	I1213 16:14:21.066358 1542350 logs.go:282] 0 containers: []
	W1213 16:14:21.066367 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:21.066376 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:21.066390 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:21.092074 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:21.092113 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:21.119921 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:21.119949 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:21.175737 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:21.175772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:21.192772 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:21.192802 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:21.258320 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:21.248712    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.249237    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.250941    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.251593    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.253749    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:21.248712    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.249237    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.250941    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.251593    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.253749    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:23.760202 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:23.770818 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:23.770889 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:23.797015 1542350 cri.go:89] found id: ""
	I1213 16:14:23.797038 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.797047 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:23.797053 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:23.797113 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:23.822062 1542350 cri.go:89] found id: ""
	I1213 16:14:23.822085 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.822093 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:23.822100 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:23.822158 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:23.874192 1542350 cri.go:89] found id: ""
	I1213 16:14:23.874214 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.874223 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:23.874229 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:23.874286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:23.900200 1542350 cri.go:89] found id: ""
	I1213 16:14:23.900221 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.900230 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:23.900236 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:23.900296 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:23.926269 1542350 cri.go:89] found id: ""
	I1213 16:14:23.926298 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.926306 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:23.926313 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:23.926373 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:23.953863 1542350 cri.go:89] found id: ""
	I1213 16:14:23.953893 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.953902 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:23.953909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:23.953978 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:23.978285 1542350 cri.go:89] found id: ""
	I1213 16:14:23.978314 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.978323 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:23.978332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:23.978392 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:24.004367 1542350 cri.go:89] found id: ""
	I1213 16:14:24.004397 1542350 logs.go:282] 0 containers: []
	W1213 16:14:24.004407 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:24.004418 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:24.004433 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:24.038684 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:24.038715 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:24.093699 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:24.093736 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:24.109888 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:24.109958 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:24.176373 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:24.167217    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.168183    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.169904    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.170492    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.172210    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:24.167217    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.168183    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.169904    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.170492    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.172210    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:24.176410 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:24.176423 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:26.703702 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:26.715414 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:26.715505 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:26.741617 1542350 cri.go:89] found id: ""
	I1213 16:14:26.741644 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.741653 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:26.741660 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:26.741725 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:26.773142 1542350 cri.go:89] found id: ""
	I1213 16:14:26.773166 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.773175 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:26.773180 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:26.773248 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:26.800698 1542350 cri.go:89] found id: ""
	I1213 16:14:26.800770 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.800792 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:26.800812 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:26.800916 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:26.826188 1542350 cri.go:89] found id: ""
	I1213 16:14:26.826213 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.826222 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:26.826228 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:26.826290 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:26.858537 1542350 cri.go:89] found id: ""
	I1213 16:14:26.858564 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.858573 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:26.858579 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:26.858644 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:26.893373 1542350 cri.go:89] found id: ""
	I1213 16:14:26.893401 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.893411 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:26.893417 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:26.893491 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:26.924977 1542350 cri.go:89] found id: ""
	I1213 16:14:26.925004 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.925013 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:26.925019 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:26.925080 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:26.949933 1542350 cri.go:89] found id: ""
	I1213 16:14:26.949962 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.949971 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:26.949980 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:26.949997 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:26.980349 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:26.980380 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:27.038924 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:27.038960 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:27.055463 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:27.055494 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:27.125589 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:27.116928    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.117440    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119173    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119547    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.121058    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:27.116928    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.117440    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119173    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119547    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.121058    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:27.125608 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:27.125624 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:29.652560 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:29.663991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:29.664080 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:29.692800 1542350 cri.go:89] found id: ""
	I1213 16:14:29.692825 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.692834 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:29.692841 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:29.692908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:29.724553 1542350 cri.go:89] found id: ""
	I1213 16:14:29.724585 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.724595 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:29.724603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:29.724665 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:29.750391 1542350 cri.go:89] found id: ""
	I1213 16:14:29.750460 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.750484 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:29.750502 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:29.750593 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:29.774900 1542350 cri.go:89] found id: ""
	I1213 16:14:29.774968 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.774994 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:29.775012 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:29.775104 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:29.800460 1542350 cri.go:89] found id: ""
	I1213 16:14:29.800503 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.800512 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:29.800518 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:29.800581 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:29.825184 1542350 cri.go:89] found id: ""
	I1213 16:14:29.825261 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.825285 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:29.825305 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:29.825391 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:29.857574 1542350 cri.go:89] found id: ""
	I1213 16:14:29.857604 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.857613 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:29.857619 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:29.857681 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:29.886573 1542350 cri.go:89] found id: ""
	I1213 16:14:29.886602 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.886610 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:29.886620 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:29.886636 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:29.954547 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:29.945729    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.946349    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948212    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948764    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.950488    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:29.945729    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.946349    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948212    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948764    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.950488    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:29.954614 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:29.954636 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:29.980281 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:29.980318 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:30.020553 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:30.020640 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:30.112248 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:30.112288 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
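	Each poll in the log above follows the same shape: look for a kube-apiserver process, list CRI containers for every control-plane component, find none, then fall back to gathering kubelet/dmesg/describe-nodes/containerd/container-status logs before retrying a few seconds later. A minimal sketch of that wait loop, assuming local access to the same crictl invocation seen in the log (an illustration only, not minikube's actual implementation, which drives these commands over SSH):

	// waitapiserver.go - illustrative sketch of the polling pattern visible in
	// the log above: retry `crictl ps` until a kube-apiserver container shows
	// up or a deadline expires. Assumes crictl is on PATH and runnable via
	// sudo; this is not minikube's real code path.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// apiserverContainerID runs the same crictl query seen in the log and
	// returns the first matching container ID, or "" when nothing matches.
	func apiserverContainerID(ctx context.Context) (string, error) {
		out, err := exec.CommandContext(ctx,
			"sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			return "", err
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return "", nil
		}
		return ids[0], nil
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		ticker := time.NewTicker(3 * time.Second) // the log shows a ~3s poll interval
		defer ticker.Stop()

		for {
			select {
			case <-ctx.Done():
				fmt.Println("gave up waiting for kube-apiserver container")
				return
			case <-ticker.C:
				id, err := apiserverContainerID(ctx)
				if err != nil {
					fmt.Println("crictl error:", err)
					continue
				}
				if id != "" {
					fmt.Println("kube-apiserver container found:", id)
					return
				}
				fmt.Println(`found id: "" - still waiting`)
			}
		}
	}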
	I1213 16:14:32.632543 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:32.644615 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:32.644739 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:32.671076 1542350 cri.go:89] found id: ""
	I1213 16:14:32.671103 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.671115 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:32.671124 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:32.671204 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:32.705219 1542350 cri.go:89] found id: ""
	I1213 16:14:32.705245 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.705255 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:32.705264 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:32.705345 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:32.734663 1542350 cri.go:89] found id: ""
	I1213 16:14:32.734764 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.734796 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:32.734826 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:32.734911 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:32.763416 1542350 cri.go:89] found id: ""
	I1213 16:14:32.763441 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.763451 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:32.763457 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:32.763519 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:32.790404 1542350 cri.go:89] found id: ""
	I1213 16:14:32.790478 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.790500 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:32.790519 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:32.790638 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:32.818613 1542350 cri.go:89] found id: ""
	I1213 16:14:32.818699 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.818735 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:32.818773 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:32.818908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:32.850999 1542350 cri.go:89] found id: ""
	I1213 16:14:32.851029 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.851038 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:32.851050 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:32.851113 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:32.883800 1542350 cri.go:89] found id: ""
	I1213 16:14:32.883828 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.883837 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:32.883846 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:32.883857 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:32.950061 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:32.950111 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:32.967586 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:32.967617 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:33.038320 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:33.029200    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.030090    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.031863    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.032322    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.033913    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:33.029200    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.030090    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.031863    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.032322    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.033913    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:33.038342 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:33.038357 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:33.066098 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:33.066154 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:35.607481 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:35.619526 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:35.619589 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:35.646097 1542350 cri.go:89] found id: ""
	I1213 16:14:35.646120 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.646131 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:35.646137 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:35.646197 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:35.671288 1542350 cri.go:89] found id: ""
	I1213 16:14:35.671349 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.671358 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:35.671364 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:35.671428 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:35.696891 1542350 cri.go:89] found id: ""
	I1213 16:14:35.696915 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.696923 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:35.696930 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:35.696990 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:35.722027 1542350 cri.go:89] found id: ""
	I1213 16:14:35.722049 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.722057 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:35.722063 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:35.722120 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:35.746428 1542350 cri.go:89] found id: ""
	I1213 16:14:35.746450 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.746458 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:35.746465 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:35.746521 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:35.771433 1542350 cri.go:89] found id: ""
	I1213 16:14:35.771456 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.771465 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:35.771471 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:35.771527 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:35.795226 1542350 cri.go:89] found id: ""
	I1213 16:14:35.795292 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.795408 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:35.795422 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:35.795494 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:35.819205 1542350 cri.go:89] found id: ""
	I1213 16:14:35.819237 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.819246 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:35.819256 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:35.819268 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:35.856667 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:35.856698 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:35.921282 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:35.921317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:35.937351 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:35.937379 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:36.013024 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:35.998154    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:35.998523    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000027    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000325    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.001562    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:35.998154    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:35.998523    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000027    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000325    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.001562    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:36.013050 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:36.013065 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:38.540010 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:38.553894 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:38.553969 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:38.587080 1542350 cri.go:89] found id: ""
	I1213 16:14:38.587102 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.587110 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:38.587116 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:38.587180 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:38.615796 1542350 cri.go:89] found id: ""
	I1213 16:14:38.615820 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.615829 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:38.615835 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:38.615895 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:38.652609 1542350 cri.go:89] found id: ""
	I1213 16:14:38.652634 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.652643 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:38.652649 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:38.652706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:38.681712 1542350 cri.go:89] found id: ""
	I1213 16:14:38.681738 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.681747 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:38.681753 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:38.681812 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:38.707047 1542350 cri.go:89] found id: ""
	I1213 16:14:38.707076 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.707085 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:38.707091 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:38.707154 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:38.731834 1542350 cri.go:89] found id: ""
	I1213 16:14:38.731868 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.731878 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:38.731884 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:38.731951 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:38.755752 1542350 cri.go:89] found id: ""
	I1213 16:14:38.755816 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.755838 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:38.755855 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:38.755940 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:38.780290 1542350 cri.go:89] found id: ""
	I1213 16:14:38.780316 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.780325 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:38.780335 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:38.780354 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:38.837581 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:38.837613 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:38.855100 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:38.855130 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:38.927088 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:38.917976    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.918723    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.920509    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.921087    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.922821    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:38.917976    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.918723    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.920509    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.921087    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.922821    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:38.927155 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:38.927178 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:38.952089 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:38.952127 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:41.483644 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:41.494493 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:41.494574 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:41.518966 1542350 cri.go:89] found id: ""
	I1213 16:14:41.518988 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.518996 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:41.519002 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:41.519066 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:41.545695 1542350 cri.go:89] found id: ""
	I1213 16:14:41.545720 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.545729 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:41.545734 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:41.545798 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:41.571565 1542350 cri.go:89] found id: ""
	I1213 16:14:41.571591 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.571600 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:41.571606 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:41.571673 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:41.619450 1542350 cri.go:89] found id: ""
	I1213 16:14:41.619473 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.619482 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:41.619488 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:41.619548 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:41.653736 1542350 cri.go:89] found id: ""
	I1213 16:14:41.653757 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.653766 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:41.653773 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:41.653835 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:41.682235 1542350 cri.go:89] found id: ""
	I1213 16:14:41.682257 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.682265 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:41.682272 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:41.682332 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:41.708453 1542350 cri.go:89] found id: ""
	I1213 16:14:41.708475 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.708489 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:41.708496 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:41.708554 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:41.737148 1542350 cri.go:89] found id: ""
	I1213 16:14:41.737171 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.737179 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:41.737193 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:41.737205 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:41.792082 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:41.792120 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:41.808566 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:41.808597 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:41.888202 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:41.877628    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.878620    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.880407    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.881012    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.883935    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:41.877628    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.878620    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.880407    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.881012    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.883935    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:41.888226 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:41.888238 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:41.913429 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:41.913466 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:44.445881 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:44.456550 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:44.456627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:44.482008 1542350 cri.go:89] found id: ""
	I1213 16:14:44.482031 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.482039 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:44.482045 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:44.482103 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:44.507630 1542350 cri.go:89] found id: ""
	I1213 16:14:44.507654 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.507662 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:44.507668 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:44.507729 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:44.536680 1542350 cri.go:89] found id: ""
	I1213 16:14:44.536704 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.536713 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:44.536719 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:44.536778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:44.565166 1542350 cri.go:89] found id: ""
	I1213 16:14:44.565189 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.565199 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:44.565205 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:44.565265 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:44.598174 1542350 cri.go:89] found id: ""
	I1213 16:14:44.598197 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.598206 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:44.598214 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:44.598280 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:44.640061 1542350 cri.go:89] found id: ""
	I1213 16:14:44.640084 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.640092 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:44.640099 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:44.640159 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:44.671940 1542350 cri.go:89] found id: ""
	I1213 16:14:44.671968 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.671976 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:44.671982 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:44.672044 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:44.698885 1542350 cri.go:89] found id: ""
	I1213 16:14:44.698907 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.698916 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:44.698925 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:44.698939 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:44.715019 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:44.715090 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:44.777959 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:44.769893    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.770508    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772034    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772476    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.773993    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:44.769893    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.770508    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772034    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772476    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.773993    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:44.777983 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:44.777996 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:44.803994 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:44.804031 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:44.835446 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:44.835476 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:47.402282 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:47.413184 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:47.413252 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:47.439678 1542350 cri.go:89] found id: ""
	I1213 16:14:47.439702 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.439710 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:47.439717 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:47.439777 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:47.469694 1542350 cri.go:89] found id: ""
	I1213 16:14:47.469720 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.469728 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:47.469734 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:47.469797 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:47.495280 1542350 cri.go:89] found id: ""
	I1213 16:14:47.495306 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.495339 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:47.495346 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:47.495408 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:47.525092 1542350 cri.go:89] found id: ""
	I1213 16:14:47.525118 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.525127 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:47.525133 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:47.525194 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:47.551755 1542350 cri.go:89] found id: ""
	I1213 16:14:47.551782 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.551790 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:47.551797 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:47.551858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:47.577368 1542350 cri.go:89] found id: ""
	I1213 16:14:47.577393 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.577402 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:47.577408 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:47.577479 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:47.603993 1542350 cri.go:89] found id: ""
	I1213 16:14:47.604016 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.604024 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:47.604030 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:47.604095 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:47.634166 1542350 cri.go:89] found id: ""
	I1213 16:14:47.634188 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.634197 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:47.634206 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:47.634217 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:47.698875 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:47.698911 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:47.715548 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:47.715580 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:47.783485 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:47.774772    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.775432    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777207    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777881    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.779547    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:47.774772    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.775432    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777207    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777881    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.779547    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:47.783508 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:47.783521 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:47.809639 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:47.809672 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:50.342353 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:50.355175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:50.355303 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:50.381034 1542350 cri.go:89] found id: ""
	I1213 16:14:50.381066 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.381076 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:50.381084 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:50.381166 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:50.409181 1542350 cri.go:89] found id: ""
	I1213 16:14:50.409208 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.409217 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:50.409222 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:50.409286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:50.438419 1542350 cri.go:89] found id: ""
	I1213 16:14:50.438451 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.438460 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:50.438466 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:50.438525 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:50.468687 1542350 cri.go:89] found id: ""
	I1213 16:14:50.468713 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.468721 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:50.468728 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:50.468789 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:50.498096 1542350 cri.go:89] found id: ""
	I1213 16:14:50.498163 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.498187 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:50.498205 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:50.498292 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:50.523754 1542350 cri.go:89] found id: ""
	I1213 16:14:50.523820 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.523835 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:50.523843 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:50.523902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:50.555302 1542350 cri.go:89] found id: ""
	I1213 16:14:50.555387 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.555403 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:50.555410 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:50.555477 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:50.581005 1542350 cri.go:89] found id: ""
	I1213 16:14:50.581035 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.581044 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:50.581054 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:50.581067 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:50.611931 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:50.612005 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:50.650728 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:50.650754 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:50.709840 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:50.709878 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:50.729613 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:50.729711 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:50.796424 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:50.788003    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.788585    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790191    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790714    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.792323    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:50.788003    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.788585    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790191    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790714    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.792323    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
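	Every "describe nodes" attempt above fails the same way: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and because no kube-apiserver container ever starts, nothing is listening on that port, so the dial is refused immediately. A plain TCP dial distinguishes this "nothing listening" case from a slow or unhealthy apiserver; a minimal sketch, assuming the same host and port reported in the errors above:

	// probe8443.go - illustrative check that distinguishes "nothing listening"
	// (connection refused, as in the log) from "listening but unhealthy".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// With no kube-apiserver container running, this prints a
			// "connection refused" error, mirroring the kubectl output above.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("port 8443 accepts connections; apiserver may still be unhealthy")
	}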
	I1213 16:14:53.298328 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:53.309106 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:53.309178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:53.333481 1542350 cri.go:89] found id: ""
	I1213 16:14:53.333513 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.333523 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:53.333529 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:53.333590 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:53.358898 1542350 cri.go:89] found id: ""
	I1213 16:14:53.358923 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.358932 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:53.358938 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:53.358999 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:53.384286 1542350 cri.go:89] found id: ""
	I1213 16:14:53.384311 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.384322 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:53.384329 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:53.384388 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:53.408999 1542350 cri.go:89] found id: ""
	I1213 16:14:53.409022 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.409031 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:53.409037 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:53.409102 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:53.437666 1542350 cri.go:89] found id: ""
	I1213 16:14:53.437688 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.437696 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:53.437703 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:53.437764 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:53.462775 1542350 cri.go:89] found id: ""
	I1213 16:14:53.462868 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.462885 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:53.462893 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:53.462955 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:53.489379 1542350 cri.go:89] found id: ""
	I1213 16:14:53.489403 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.489413 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:53.489419 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:53.489479 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:53.513660 1542350 cri.go:89] found id: ""
	I1213 16:14:53.513683 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.513691 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:53.513701 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:53.513711 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:53.544644 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:53.544670 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:53.603653 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:53.603733 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:53.620761 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:53.620846 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:53.694809 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:53.685787    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.686311    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688155    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688956    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.690579    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:53.685787    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.686311    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688155    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688956    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.690579    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:53.694871 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:53.694886 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:56.222442 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:56.233418 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:56.233521 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:56.262552 1542350 cri.go:89] found id: ""
	I1213 16:14:56.262578 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.262587 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:56.262594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:56.262677 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:56.290583 1542350 cri.go:89] found id: ""
	I1213 16:14:56.290611 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.290620 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:56.290627 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:56.290778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:56.316264 1542350 cri.go:89] found id: ""
	I1213 16:14:56.316292 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.316300 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:56.316306 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:56.316366 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:56.341047 1542350 cri.go:89] found id: ""
	I1213 16:14:56.341072 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.341080 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:56.341086 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:56.341163 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:56.369874 1542350 cri.go:89] found id: ""
	I1213 16:14:56.369909 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.369918 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:56.369924 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:56.369993 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:56.396373 1542350 cri.go:89] found id: ""
	I1213 16:14:56.396400 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.396408 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:56.396415 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:56.396480 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:56.421264 1542350 cri.go:89] found id: ""
	I1213 16:14:56.421286 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.421294 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:56.421300 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:56.421362 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:56.449683 1542350 cri.go:89] found id: ""
	I1213 16:14:56.449708 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.449717 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:56.449727 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:56.449740 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:56.513612 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:56.505286    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.506108    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.507846    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.508199    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.509740    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:56.505286    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.506108    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.507846    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.508199    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.509740    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:56.513635 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:56.513648 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:56.539159 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:56.539193 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:56.569885 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:56.569913 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:56.636667 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:56.636712 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:59.161215 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:59.172070 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:59.172139 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:59.196977 1542350 cri.go:89] found id: ""
	I1213 16:14:59.197003 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.197013 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:59.197019 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:59.197124 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:59.222813 1542350 cri.go:89] found id: ""
	I1213 16:14:59.222839 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.222849 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:59.222855 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:59.222921 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:59.249285 1542350 cri.go:89] found id: ""
	I1213 16:14:59.249309 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.249317 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:59.249323 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:59.249385 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:59.275052 1542350 cri.go:89] found id: ""
	I1213 16:14:59.275076 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.275085 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:59.275091 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:59.275152 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:59.301297 1542350 cri.go:89] found id: ""
	I1213 16:14:59.301323 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.301331 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:59.301337 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:59.301395 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:59.326556 1542350 cri.go:89] found id: ""
	I1213 16:14:59.326582 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.326591 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:59.326599 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:59.326658 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:59.360044 1542350 cri.go:89] found id: ""
	I1213 16:14:59.360070 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.360079 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:59.360085 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:59.360145 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:59.385355 1542350 cri.go:89] found id: ""
	I1213 16:14:59.385380 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.385389 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:59.385398 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:59.385410 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:59.441005 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:59.441040 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:59.456936 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:59.456968 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:59.523389 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:59.514843    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.515544    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517163    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517739    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.519447    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:59.514843    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.515544    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517163    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517739    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.519447    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:59.523410 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:59.523423 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:59.548680 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:59.548717 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:02.077266 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:02.091997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:02.092082 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:02.125051 1542350 cri.go:89] found id: ""
	I1213 16:15:02.125079 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.125088 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:02.125095 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:02.125158 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:02.155518 1542350 cri.go:89] found id: ""
	I1213 16:15:02.155547 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.155555 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:02.155567 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:02.155626 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:02.180408 1542350 cri.go:89] found id: ""
	I1213 16:15:02.180435 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.180444 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:02.180450 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:02.180541 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:02.206923 1542350 cri.go:89] found id: ""
	I1213 16:15:02.206957 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.206966 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:02.206979 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:02.207049 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:02.234308 1542350 cri.go:89] found id: ""
	I1213 16:15:02.234332 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.234341 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:02.234347 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:02.234412 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:02.260647 1542350 cri.go:89] found id: ""
	I1213 16:15:02.260671 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.260680 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:02.260686 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:02.260746 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:02.287051 1542350 cri.go:89] found id: ""
	I1213 16:15:02.287075 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.287083 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:02.287089 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:02.287151 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:02.313703 1542350 cri.go:89] found id: ""
	I1213 16:15:02.313726 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.313734 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:02.313744 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:02.313755 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:02.369628 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:02.369663 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:02.385814 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:02.385896 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:02.450440 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:02.441569    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.442433    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.443449    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445136    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445445    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:02.441569    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.442433    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.443449    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445136    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445445    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:02.450460 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:02.450475 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:02.475994 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:02.476032 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:05.008952 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:05.023767 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:05.023852 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:05.048943 1542350 cri.go:89] found id: ""
	I1213 16:15:05.048970 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.048979 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:05.048985 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:05.049046 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:05.073030 1542350 cri.go:89] found id: ""
	I1213 16:15:05.073057 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.073066 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:05.073072 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:05.073141 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:05.113695 1542350 cri.go:89] found id: ""
	I1213 16:15:05.113724 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.113733 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:05.113739 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:05.113798 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:05.143435 1542350 cri.go:89] found id: ""
	I1213 16:15:05.143462 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.143471 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:05.143476 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:05.143533 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:05.169643 1542350 cri.go:89] found id: ""
	I1213 16:15:05.169672 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.169682 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:05.169694 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:05.169756 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:05.194836 1542350 cri.go:89] found id: ""
	I1213 16:15:05.194865 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.194874 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:05.194881 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:05.194939 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:05.223183 1542350 cri.go:89] found id: ""
	I1213 16:15:05.223208 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.223216 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:05.223223 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:05.223284 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:05.247344 1542350 cri.go:89] found id: ""
	I1213 16:15:05.247368 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.247377 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:05.247386 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:05.247400 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:05.302110 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:05.302144 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:05.318507 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:05.318537 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:05.383855 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:05.375536    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.376623    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.377525    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.378529    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.380053    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:05.375536    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.376623    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.377525    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.378529    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.380053    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:05.383878 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:05.383891 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:05.408947 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:05.408984 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:07.939749 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:07.950076 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:07.950150 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:07.975327 1542350 cri.go:89] found id: ""
	I1213 16:15:07.975351 1542350 logs.go:282] 0 containers: []
	W1213 16:15:07.975360 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:07.975366 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:07.975423 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:07.999830 1542350 cri.go:89] found id: ""
	I1213 16:15:07.999856 1542350 logs.go:282] 0 containers: []
	W1213 16:15:07.999864 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:07.999870 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:07.999928 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:08.026521 1542350 cri.go:89] found id: ""
	I1213 16:15:08.026547 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.026556 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:08.026562 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:08.026627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:08.053320 1542350 cri.go:89] found id: ""
	I1213 16:15:08.053343 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.053352 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:08.053358 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:08.053418 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:08.084631 1542350 cri.go:89] found id: ""
	I1213 16:15:08.084654 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.084663 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:08.084669 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:08.084727 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:08.115761 1542350 cri.go:89] found id: ""
	I1213 16:15:08.115842 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.115866 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:08.115884 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:08.115992 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:08.143108 1542350 cri.go:89] found id: ""
	I1213 16:15:08.143131 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.143141 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:08.143150 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:08.143210 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:08.169485 1542350 cri.go:89] found id: ""
	I1213 16:15:08.169548 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.169571 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:08.169593 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:08.169632 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:08.186535 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:08.186608 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:08.254187 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:08.245289    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.245832    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.247630    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.248117    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.249849    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:08.245289    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.245832    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.247630    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.248117    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.249849    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:08.254252 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:08.254277 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:08.279498 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:08.279538 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:08.307012 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:08.307040 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:10.863431 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:10.875836 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:10.875902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:10.902828 1542350 cri.go:89] found id: ""
	I1213 16:15:10.902850 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.902859 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:10.902864 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:10.902924 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:10.927709 1542350 cri.go:89] found id: ""
	I1213 16:15:10.927732 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.927741 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:10.927747 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:10.927807 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:10.952424 1542350 cri.go:89] found id: ""
	I1213 16:15:10.952448 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.952457 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:10.952466 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:10.952544 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:10.977056 1542350 cri.go:89] found id: ""
	I1213 16:15:10.977087 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.977095 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:10.977101 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:10.977163 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:11.006742 1542350 cri.go:89] found id: ""
	I1213 16:15:11.006767 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.006776 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:11.006782 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:11.006857 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:11.033448 1542350 cri.go:89] found id: ""
	I1213 16:15:11.033471 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.033481 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:11.033491 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:11.033549 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:11.058288 1542350 cri.go:89] found id: ""
	I1213 16:15:11.058319 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.058329 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:11.058335 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:11.058403 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:11.086206 1542350 cri.go:89] found id: ""
	I1213 16:15:11.086229 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.086238 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:11.086248 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:11.086260 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:11.149204 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:11.149250 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:11.169208 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:11.169240 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:11.239824 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:11.230926    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.231789    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233414    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233896    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.235615    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:11.230926    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.231789    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233414    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233896    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.235615    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:11.239888 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:11.239913 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:11.265156 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:11.265190 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:13.793650 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:13.804879 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:13.804957 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:13.830496 1542350 cri.go:89] found id: ""
	I1213 16:15:13.830524 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.830534 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:13.830541 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:13.830598 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:13.860289 1542350 cri.go:89] found id: ""
	I1213 16:15:13.860316 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.860325 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:13.860331 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:13.860404 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:13.889862 1542350 cri.go:89] found id: ""
	I1213 16:15:13.889900 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.889909 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:13.889915 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:13.889982 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:13.917096 1542350 cri.go:89] found id: ""
	I1213 16:15:13.917119 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.917127 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:13.917134 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:13.917192 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:13.941374 1542350 cri.go:89] found id: ""
	I1213 16:15:13.941397 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.941406 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:13.941412 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:13.941472 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:13.966429 1542350 cri.go:89] found id: ""
	I1213 16:15:13.966457 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.966467 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:13.966474 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:13.966536 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:13.992124 1542350 cri.go:89] found id: ""
	I1213 16:15:13.992193 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.992217 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:13.992231 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:13.992304 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:14.018581 1542350 cri.go:89] found id: ""
	I1213 16:15:14.018613 1542350 logs.go:282] 0 containers: []
	W1213 16:15:14.018621 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:14.018631 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:14.018643 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:14.076560 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:14.076594 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:14.093391 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:14.093470 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:14.169809 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:14.161020    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.162081    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.163786    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.164233    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.165949    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:14.161020    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.162081    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.163786    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.164233    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.165949    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:14.169831 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:14.169844 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:14.196553 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:14.196588 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
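The polling cycle above repeats for the remainder of this excerpt roughly every three seconds: none of the control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) are ever created, so every "describe nodes" attempt fails with connection refused on localhost:8443. A minimal sketch of how the same checks could be reproduced by hand inside the node is shown here; the container-name filter is taken from the log, while the socket check and healthz probe are assumptions added for illustration and were not part of the original run:

	# list the containers the CRI runtime knows about (same check the log performs)
	sudo crictl ps -a --name kube-apiserver
	sudo crictl pods
	# confirm whether anything is listening on the apiserver port the kubeconfig points at
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	# probe the endpoint kubectl keeps failing to reach (expected to be refused here)
	curl -sk https://localhost:8443/healthz || true
	# the kubelet journal usually explains why the static pods never started
	sudo journalctl -u kubelet -n 100 --no-pager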
	I1213 16:15:16.730383 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:16.741020 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:16.741091 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:16.765402 1542350 cri.go:89] found id: ""
	I1213 16:15:16.765425 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.765434 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:16.765440 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:16.765498 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:16.791004 1542350 cri.go:89] found id: ""
	I1213 16:15:16.791033 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.791042 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:16.791048 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:16.791112 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:16.816897 1542350 cri.go:89] found id: ""
	I1213 16:15:16.816925 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.816933 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:16.816939 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:16.817002 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:16.861774 1542350 cri.go:89] found id: ""
	I1213 16:15:16.861796 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.861803 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:16.861809 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:16.861868 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:16.895555 1542350 cri.go:89] found id: ""
	I1213 16:15:16.895575 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.895584 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:16.895589 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:16.895650 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:16.923607 1542350 cri.go:89] found id: ""
	I1213 16:15:16.923630 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.923638 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:16.923644 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:16.923705 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:16.952569 1542350 cri.go:89] found id: ""
	I1213 16:15:16.952602 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.952612 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:16.952618 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:16.952681 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:16.982597 1542350 cri.go:89] found id: ""
	I1213 16:15:16.982625 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.982634 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:16.982644 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:16.982657 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:17.040379 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:17.040417 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:17.056673 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:17.056703 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:17.155960 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:17.135813    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.147685    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.148473    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150347    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150872    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:17.135813    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.147685    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.148473    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150347    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150872    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:17.155984 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:17.155997 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:17.181703 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:17.181742 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:19.710412 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:19.723576 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:19.723654 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:19.752079 1542350 cri.go:89] found id: ""
	I1213 16:15:19.752102 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.752111 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:19.752117 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:19.752198 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:19.776763 1542350 cri.go:89] found id: ""
	I1213 16:15:19.776829 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.776845 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:19.776853 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:19.776912 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:19.803069 1542350 cri.go:89] found id: ""
	I1213 16:15:19.803133 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.803149 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:19.803157 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:19.803216 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:19.828299 1542350 cri.go:89] found id: ""
	I1213 16:15:19.828332 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.828342 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:19.828348 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:19.828419 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:19.858915 1542350 cri.go:89] found id: ""
	I1213 16:15:19.858992 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.859013 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:19.859032 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:19.859127 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:19.889950 1542350 cri.go:89] found id: ""
	I1213 16:15:19.889987 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.889996 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:19.890003 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:19.890076 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:19.915855 1542350 cri.go:89] found id: ""
	I1213 16:15:19.915879 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.915893 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:19.915899 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:19.915958 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:19.945371 1542350 cri.go:89] found id: ""
	I1213 16:15:19.945409 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.945418 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:19.945460 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:19.945484 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:20.004545 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:20.004586 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:20.030075 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:20.030110 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:20.119134 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:20.105779    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.107335    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.108625    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.110278    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.111655    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:20.105779    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.107335    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.108625    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.110278    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.111655    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:20.119228 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:20.119426 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:20.157972 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:20.158017 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:22.690836 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:22.701577 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:22.701651 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:22.725883 1542350 cri.go:89] found id: ""
	I1213 16:15:22.725908 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.725917 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:22.725922 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:22.725980 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:22.750347 1542350 cri.go:89] found id: ""
	I1213 16:15:22.750373 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.750382 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:22.750388 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:22.750446 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:22.773604 1542350 cri.go:89] found id: ""
	I1213 16:15:22.773627 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.773636 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:22.773642 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:22.773699 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:22.798122 1542350 cri.go:89] found id: ""
	I1213 16:15:22.798144 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.798153 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:22.798159 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:22.798216 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:22.825364 1542350 cri.go:89] found id: ""
	I1213 16:15:22.825386 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.825394 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:22.825400 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:22.825463 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:22.860458 1542350 cri.go:89] found id: ""
	I1213 16:15:22.860480 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.860489 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:22.860503 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:22.860560 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:22.888782 1542350 cri.go:89] found id: ""
	I1213 16:15:22.888865 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.888889 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:22.888907 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:22.888991 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:22.917264 1542350 cri.go:89] found id: ""
	I1213 16:15:22.917288 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.917297 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:22.917306 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:22.917318 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:22.947808 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:22.947850 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:23.002868 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:23.002910 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:23.019957 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:23.019988 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:23.095906 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:23.076845    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.077548    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079269    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079937    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.083952    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:23.076845    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.077548    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079269    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079937    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.083952    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:23.095985 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:23.096017 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:25.625418 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:25.636179 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:25.636256 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:25.660796 1542350 cri.go:89] found id: ""
	I1213 16:15:25.660819 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.660827 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:25.660833 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:25.660890 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:25.692137 1542350 cri.go:89] found id: ""
	I1213 16:15:25.692161 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.692169 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:25.692175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:25.692234 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:25.722645 1542350 cri.go:89] found id: ""
	I1213 16:15:25.722667 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.722677 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:25.722683 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:25.722741 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:25.746597 1542350 cri.go:89] found id: ""
	I1213 16:15:25.746619 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.746627 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:25.746633 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:25.746690 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:25.773364 1542350 cri.go:89] found id: ""
	I1213 16:15:25.773391 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.773399 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:25.773405 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:25.773464 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:25.798024 1542350 cri.go:89] found id: ""
	I1213 16:15:25.798047 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.798056 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:25.798062 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:25.798140 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:25.824949 1542350 cri.go:89] found id: ""
	I1213 16:15:25.824975 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.824984 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:25.824989 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:25.825065 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:25.851736 1542350 cri.go:89] found id: ""
	I1213 16:15:25.851809 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.851843 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:25.851869 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:25.851910 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:25.868875 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:25.868902 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:25.941457 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:25.933124    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.933785    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935483    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935919    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.937566    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:25.933124    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.933785    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935483    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935919    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.937566    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:25.941527 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:25.941548 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:25.966625 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:25.966656 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:25.996976 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:25.997004 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:28.556122 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:28.567257 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:28.567352 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:28.592087 1542350 cri.go:89] found id: ""
	I1213 16:15:28.592153 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.592179 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:28.592196 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:28.592293 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:28.616658 1542350 cri.go:89] found id: ""
	I1213 16:15:28.616731 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.616746 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:28.616753 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:28.616822 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:28.640310 1542350 cri.go:89] found id: ""
	I1213 16:15:28.640335 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.640344 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:28.640349 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:28.640412 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:28.665406 1542350 cri.go:89] found id: ""
	I1213 16:15:28.665433 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.665443 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:28.665449 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:28.665508 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:28.690028 1542350 cri.go:89] found id: ""
	I1213 16:15:28.690090 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.690121 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:28.690143 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:28.690247 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:28.714656 1542350 cri.go:89] found id: ""
	I1213 16:15:28.714719 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.714753 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:28.714775 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:28.714862 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:28.741721 1542350 cri.go:89] found id: ""
	I1213 16:15:28.741745 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.741753 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:28.741759 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:28.741860 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:28.770039 1542350 cri.go:89] found id: ""
	I1213 16:15:28.770106 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.770132 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:28.770153 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:28.770191 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:28.794482 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:28.794514 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:28.825722 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:28.825751 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:28.885792 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:28.885826 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:28.902629 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:28.902658 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:28.968699 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:28.960883    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.961407    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.962907    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.963230    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.964863    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:28.960883    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.961407    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.962907    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.963230    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.964863    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:31.469803 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:31.480479 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:31.480600 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:31.512783 1542350 cri.go:89] found id: ""
	I1213 16:15:31.512807 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.512816 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:31.512823 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:31.512881 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:31.539773 1542350 cri.go:89] found id: ""
	I1213 16:15:31.539800 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.539815 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:31.539836 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:31.539915 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:31.564690 1542350 cri.go:89] found id: ""
	I1213 16:15:31.564715 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.564723 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:31.564729 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:31.564791 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:31.589449 1542350 cri.go:89] found id: ""
	I1213 16:15:31.589476 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.589484 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:31.589490 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:31.589550 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:31.614171 1542350 cri.go:89] found id: ""
	I1213 16:15:31.614203 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.614212 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:31.614218 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:31.614278 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:31.641466 1542350 cri.go:89] found id: ""
	I1213 16:15:31.641489 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.641498 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:31.641505 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:31.641563 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:31.665618 1542350 cri.go:89] found id: ""
	I1213 16:15:31.665641 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.665649 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:31.665656 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:31.665715 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:31.694436 1542350 cri.go:89] found id: ""
	I1213 16:15:31.694531 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.694554 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:31.694589 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:31.694621 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:31.720014 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:31.720047 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:31.746773 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:31.746844 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:31.802034 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:31.802070 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:31.819067 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:31.819096 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:31.926406 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:31.917414    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.918134    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.919118    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.920759    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.921377    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:31.917414    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.918134    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.919118    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.920759    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.921377    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:34.427501 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:34.438467 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:34.438539 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:34.469663 1542350 cri.go:89] found id: ""
	I1213 16:15:34.469685 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.469693 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:34.469699 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:34.469763 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:34.497352 1542350 cri.go:89] found id: ""
	I1213 16:15:34.497375 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.497384 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:34.497391 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:34.497449 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:34.522437 1542350 cri.go:89] found id: ""
	I1213 16:15:34.522462 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.522471 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:34.522477 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:34.522533 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:34.546310 1542350 cri.go:89] found id: ""
	I1213 16:15:34.546335 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.546344 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:34.546350 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:34.546410 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:34.570057 1542350 cri.go:89] found id: ""
	I1213 16:15:34.570082 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.570091 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:34.570097 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:34.570154 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:34.597335 1542350 cri.go:89] found id: ""
	I1213 16:15:34.597360 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.597369 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:34.597375 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:34.597438 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:34.622402 1542350 cri.go:89] found id: ""
	I1213 16:15:34.622426 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.622435 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:34.622441 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:34.622501 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:34.647379 1542350 cri.go:89] found id: ""
	I1213 16:15:34.647405 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.647414 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:34.647423 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:34.647435 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:34.707433 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:34.699526    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.700203    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701513    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701944    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.703518    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:34.699526    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.700203    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701513    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701944    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.703518    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:34.707452 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:34.707464 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:34.732617 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:34.732650 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:34.760551 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:34.760579 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:34.817043 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:34.817078 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:37.335446 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:37.346358 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:37.346480 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:37.375693 1542350 cri.go:89] found id: ""
	I1213 16:15:37.375763 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.375784 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:37.375803 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:37.375896 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:37.401729 1542350 cri.go:89] found id: ""
	I1213 16:15:37.401753 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.401761 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:37.401768 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:37.401832 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:37.426557 1542350 cri.go:89] found id: ""
	I1213 16:15:37.426583 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.426591 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:37.426597 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:37.426659 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:37.452633 1542350 cri.go:89] found id: ""
	I1213 16:15:37.452658 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.452666 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:37.452672 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:37.452731 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:37.476262 1542350 cri.go:89] found id: ""
	I1213 16:15:37.476287 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.476296 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:37.476302 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:37.476388 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:37.501165 1542350 cri.go:89] found id: ""
	I1213 16:15:37.501190 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.501198 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:37.501204 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:37.501285 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:37.524960 1542350 cri.go:89] found id: ""
	I1213 16:15:37.524983 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.524991 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:37.524997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:37.525055 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:37.550053 1542350 cri.go:89] found id: ""
	I1213 16:15:37.550079 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.550088 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:37.550097 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:37.550109 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:37.613799 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:37.604980    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.605842    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.607596    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.608285    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.609930    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:37.604980    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.605842    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.607596    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.608285    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.609930    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:37.613824 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:37.613837 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:37.638525 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:37.638559 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:37.665937 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:37.665965 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:37.722593 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:37.722628 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:40.238420 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:40.249230 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:40.249314 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:40.273014 1542350 cri.go:89] found id: ""
	I1213 16:15:40.273089 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.273133 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:40.273147 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:40.273227 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:40.298488 1542350 cri.go:89] found id: ""
	I1213 16:15:40.298553 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.298577 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:40.298595 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:40.298679 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:40.323131 1542350 cri.go:89] found id: ""
	I1213 16:15:40.323204 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.323228 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:40.323246 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:40.323368 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:40.360968 1542350 cri.go:89] found id: ""
	I1213 16:15:40.360996 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.361005 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:40.361011 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:40.361081 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:40.392530 1542350 cri.go:89] found id: ""
	I1213 16:15:40.392564 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.392573 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:40.392580 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:40.392648 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:40.428563 1542350 cri.go:89] found id: ""
	I1213 16:15:40.428588 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.428597 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:40.428603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:40.428686 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:40.453234 1542350 cri.go:89] found id: ""
	I1213 16:15:40.453259 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.453267 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:40.453274 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:40.453373 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:40.477074 1542350 cri.go:89] found id: ""
	I1213 16:15:40.477099 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.477108 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:40.477117 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:40.477144 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:40.503301 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:40.503521 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:40.537464 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:40.537493 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:40.593489 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:40.593526 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:40.609479 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:40.609507 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:40.674540 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:40.665852    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.666546    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.668621    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.669211    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.670710    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:40.665852    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.666546    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.668621    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.669211    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.670710    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:43.175524 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:43.186492 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:43.186570 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:43.210685 1542350 cri.go:89] found id: ""
	I1213 16:15:43.210712 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.210721 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:43.210728 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:43.210787 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:43.237076 1542350 cri.go:89] found id: ""
	I1213 16:15:43.237103 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.237112 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:43.237118 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:43.237177 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:43.264682 1542350 cri.go:89] found id: ""
	I1213 16:15:43.264756 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.264771 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:43.264778 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:43.264842 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:43.290869 1542350 cri.go:89] found id: ""
	I1213 16:15:43.290896 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.290905 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:43.290912 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:43.290976 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:43.316279 1542350 cri.go:89] found id: ""
	I1213 16:15:43.316306 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.316315 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:43.316322 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:43.316383 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:43.354838 1542350 cri.go:89] found id: ""
	I1213 16:15:43.354864 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.354873 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:43.354880 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:43.354957 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:43.391172 1542350 cri.go:89] found id: ""
	I1213 16:15:43.391198 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.391207 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:43.391213 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:43.391274 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:43.418613 1542350 cri.go:89] found id: ""
	I1213 16:15:43.418647 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.418657 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:43.418667 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:43.418680 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:43.435343 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:43.435384 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:43.503984 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:43.495327    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.495856    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.497696    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.498105    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.499619    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:43.495327    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.495856    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.497696    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.498105    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.499619    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:43.504005 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:43.504018 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:43.530844 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:43.530882 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:43.563046 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:43.563079 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
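	# Every "describe nodes" attempt above fails with "connection refused" on
	# localhost:8443 because the pgrep and crictl checks find no kube-apiserver
	# process or container at all. A quick manual confirmation from inside the node
	# could look like the sketch below; the pgrep pattern and kubectl path are the
	# ones recorded in the log, while the availability of `ss` in the node image is
	# an assumption made for illustration.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig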
	I1213 16:15:46.121764 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:46.133205 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:46.133278 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:46.159902 1542350 cri.go:89] found id: ""
	I1213 16:15:46.159926 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.159935 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:46.159941 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:46.160016 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:46.189203 1542350 cri.go:89] found id: ""
	I1213 16:15:46.189236 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.189260 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:46.189267 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:46.189336 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:46.214186 1542350 cri.go:89] found id: ""
	I1213 16:15:46.214208 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.214216 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:46.214222 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:46.214281 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:46.244894 1542350 cri.go:89] found id: ""
	I1213 16:15:46.244923 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.244943 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:46.244949 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:46.245015 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:46.270668 1542350 cri.go:89] found id: ""
	I1213 16:15:46.270693 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.270702 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:46.270708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:46.270771 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:46.296520 1542350 cri.go:89] found id: ""
	I1213 16:15:46.296565 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.296595 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:46.296603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:46.296684 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:46.322387 1542350 cri.go:89] found id: ""
	I1213 16:15:46.322410 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.322418 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:46.322424 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:46.322492 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:46.359071 1542350 cri.go:89] found id: ""
	I1213 16:15:46.359093 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.359102 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:46.359111 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:46.359121 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:46.397696 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:46.397772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:46.453341 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:46.453386 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:46.469917 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:46.469945 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:46.531639 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:46.523791    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.524434    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526137    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526426    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.527840    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:46.523791    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.524434    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526137    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526426    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.527840    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:46.531665 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:46.531678 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:49.058136 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:49.069039 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:49.069109 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:49.103600 1542350 cri.go:89] found id: ""
	I1213 16:15:49.103622 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.103630 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:49.103637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:49.103694 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:49.133756 1542350 cri.go:89] found id: ""
	I1213 16:15:49.133778 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.133787 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:49.133793 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:49.133850 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:49.159824 1542350 cri.go:89] found id: ""
	I1213 16:15:49.159847 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.159856 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:49.159862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:49.159919 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:49.188461 1542350 cri.go:89] found id: ""
	I1213 16:15:49.188527 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.188567 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:49.188598 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:49.188677 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:49.212316 1542350 cri.go:89] found id: ""
	I1213 16:15:49.212338 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.212346 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:49.212352 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:49.212424 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:49.236324 1542350 cri.go:89] found id: ""
	I1213 16:15:49.236348 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.236356 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:49.236362 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:49.236423 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:49.262438 1542350 cri.go:89] found id: ""
	I1213 16:15:49.262475 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.262484 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:49.262491 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:49.262578 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:49.292613 1542350 cri.go:89] found id: ""
	I1213 16:15:49.292637 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.292646 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:49.292655 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:49.292667 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:49.350224 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:49.350260 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:49.367633 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:49.367661 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:49.436081 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:49.427382    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.428117    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.429891    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.430411    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.431982    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:49.427382    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.428117    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.429891    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.430411    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.431982    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:49.436102 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:49.436115 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:49.461438 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:49.461474 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:51.994161 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:52.005864 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:52.005962 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:52.032002 1542350 cri.go:89] found id: ""
	I1213 16:15:52.032027 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.032052 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:52.032059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:52.032118 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:52.058529 1542350 cri.go:89] found id: ""
	I1213 16:15:52.058552 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.058561 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:52.058567 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:52.058627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:52.085765 1542350 cri.go:89] found id: ""
	I1213 16:15:52.085787 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.085795 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:52.085802 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:52.085860 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:52.113317 1542350 cri.go:89] found id: ""
	I1213 16:15:52.113389 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.113411 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:52.113430 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:52.113512 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:52.144343 1542350 cri.go:89] found id: ""
	I1213 16:15:52.144364 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.144373 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:52.144379 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:52.144450 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:52.170804 1542350 cri.go:89] found id: ""
	I1213 16:15:52.170876 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.170899 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:52.170916 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:52.171015 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:52.200043 1542350 cri.go:89] found id: ""
	I1213 16:15:52.200114 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.200137 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:52.200155 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:52.200254 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:52.226948 1542350 cri.go:89] found id: ""
	I1213 16:15:52.227022 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.227057 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:52.227086 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:52.227120 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:52.282092 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:52.282131 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:52.298201 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:52.298227 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:52.381110 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:52.372691    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.373232    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.374717    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.375078    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.376590    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:52.372691    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.373232    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.374717    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.375078    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.376590    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:52.381134 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:52.381148 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:52.409962 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:52.409994 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:54.942176 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:54.952757 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:54.952836 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:54.977644 1542350 cri.go:89] found id: ""
	I1213 16:15:54.977669 1542350 logs.go:282] 0 containers: []
	W1213 16:15:54.977678 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:54.977684 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:54.977742 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:55.005694 1542350 cri.go:89] found id: ""
	I1213 16:15:55.005722 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.005732 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:55.005740 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:55.005814 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:55.038377 1542350 cri.go:89] found id: ""
	I1213 16:15:55.038411 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.038422 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:55.038428 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:55.038493 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:55.065383 1542350 cri.go:89] found id: ""
	I1213 16:15:55.065417 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.065426 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:55.065433 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:55.065493 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:55.099813 1542350 cri.go:89] found id: ""
	I1213 16:15:55.099841 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.099850 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:55.099856 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:55.099931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:55.128346 1542350 cri.go:89] found id: ""
	I1213 16:15:55.128368 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.128380 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:55.128387 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:55.128456 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:55.160925 1542350 cri.go:89] found id: ""
	I1213 16:15:55.160957 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.160966 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:55.160973 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:55.161037 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:55.188105 1542350 cri.go:89] found id: ""
	I1213 16:15:55.188132 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.188141 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:55.188151 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:55.188164 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:55.218869 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:55.218893 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:55.274258 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:55.274294 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:55.290251 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:55.290280 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:55.359521 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:55.350886    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.351709    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353380    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353940    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.355564    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:55.350886    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.351709    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353380    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353940    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.355564    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:55.359543 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:55.359556 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:57.887804 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:57.898226 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:57.898297 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:57.922697 1542350 cri.go:89] found id: ""
	I1213 16:15:57.922723 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.922732 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:57.922740 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:57.922821 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:57.947431 1542350 cri.go:89] found id: ""
	I1213 16:15:57.947457 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.947467 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:57.947473 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:57.947532 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:57.971494 1542350 cri.go:89] found id: ""
	I1213 16:15:57.971557 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.971582 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:57.971601 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:57.971679 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:57.999470 1542350 cri.go:89] found id: ""
	I1213 16:15:57.999495 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.999504 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:57.999510 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:57.999572 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:58.028740 1542350 cri.go:89] found id: ""
	I1213 16:15:58.028767 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.028777 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:58.028783 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:58.028849 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:58.054022 1542350 cri.go:89] found id: ""
	I1213 16:15:58.054043 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.054053 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:58.054059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:58.054121 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:58.096720 1542350 cri.go:89] found id: ""
	I1213 16:15:58.096749 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.096758 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:58.096765 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:58.096825 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:58.133084 1542350 cri.go:89] found id: ""
	I1213 16:15:58.133114 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.133123 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:58.133133 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:58.133144 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:58.198401 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:58.198437 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:58.216601 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:58.216683 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:58.288456 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:58.279720    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.280528    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282071    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282726    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.283743    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:58.279720    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.280528    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282071    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282726    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.283743    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:58.288523 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:58.288544 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:58.314432 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:58.314470 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
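
The block above is one pass of minikube's apiserver wait loop: it pgreps for a kube-apiserver process, asks crictl for each expected component container, and, finding none, falls back to gathering kubelet, dmesg, containerd and "describe nodes" output. A minimal shell sketch of that probe, assuming the same profile name ("minikube") and the component list taken from the log lines:

	# Hedged re-creation of the probe shown above; the pgrep pattern and the
	# component list are copied from the log, everything else is illustrative.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver process not found"
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done

Each pass below repeats the same sequence a few seconds apart.
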
	I1213 16:16:00.851874 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:00.862470 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:00.862540 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:00.886360 1542350 cri.go:89] found id: ""
	I1213 16:16:00.886384 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.886392 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:00.886398 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:00.886458 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:00.910826 1542350 cri.go:89] found id: ""
	I1213 16:16:00.910851 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.910861 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:00.910867 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:00.910925 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:00.935111 1542350 cri.go:89] found id: ""
	I1213 16:16:00.935141 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.935150 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:00.935156 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:00.935214 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:00.960959 1542350 cri.go:89] found id: ""
	I1213 16:16:00.960982 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.960991 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:00.960997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:00.961057 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:00.985954 1542350 cri.go:89] found id: ""
	I1213 16:16:00.985977 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.985986 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:00.985991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:00.986052 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:01.011865 1542350 cri.go:89] found id: ""
	I1213 16:16:01.011889 1542350 logs.go:282] 0 containers: []
	W1213 16:16:01.011897 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:01.011903 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:01.011966 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:01.041391 1542350 cri.go:89] found id: ""
	I1213 16:16:01.041412 1542350 logs.go:282] 0 containers: []
	W1213 16:16:01.041421 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:01.041427 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:01.041486 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:01.065980 1542350 cri.go:89] found id: ""
	I1213 16:16:01.066001 1542350 logs.go:282] 0 containers: []
	W1213 16:16:01.066010 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:01.066020 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:01.066035 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:01.125520 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:01.125602 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:01.143155 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:01.143228 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:01.224569 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:01.213708    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.214378    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.216451    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.217392    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.219480    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:01.213708    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.214378    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.216451    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.217392    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.219480    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:01.224588 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:01.224602 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:01.251006 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:01.251045 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:03.780250 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:03.794327 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:03.794399 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:03.819181 1542350 cri.go:89] found id: ""
	I1213 16:16:03.819209 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.819218 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:03.819224 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:03.819285 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:03.845225 1542350 cri.go:89] found id: ""
	I1213 16:16:03.845248 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.845257 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:03.845264 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:03.845324 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:03.873944 1542350 cri.go:89] found id: ""
	I1213 16:16:03.873966 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.873975 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:03.873981 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:03.874042 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:03.899655 1542350 cri.go:89] found id: ""
	I1213 16:16:03.899685 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.899694 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:03.899701 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:03.899763 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:03.927094 1542350 cri.go:89] found id: ""
	I1213 16:16:03.927122 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.927131 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:03.927137 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:03.927196 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:03.952240 1542350 cri.go:89] found id: ""
	I1213 16:16:03.952267 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.952276 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:03.952282 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:03.952340 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:03.976494 1542350 cri.go:89] found id: ""
	I1213 16:16:03.976520 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.976529 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:03.976535 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:03.976605 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:04.001277 1542350 cri.go:89] found id: ""
	I1213 16:16:04.001304 1542350 logs.go:282] 0 containers: []
	W1213 16:16:04.001313 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:04.001324 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:04.001339 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:04.061393 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:04.061428 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:04.078258 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:04.078290 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:04.162687 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:04.153708    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.154478    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156333    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156798    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.158424    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:04.153708    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.154478    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156333    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156798    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.158424    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:04.162710 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:04.162723 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:04.187844 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:04.187879 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
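
When no containers turn up, the harness collects the same diagnostic bundle on every pass. The commands below are copied from the Run: lines above and could be replayed by hand inside the node; the binary path matches this run's Kubernetes version, and this is a sketch of the collection step rather than the harness code itself:

	# Log bundle gathered after each failed probe (commands copied from the entries above).
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u containerd -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
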
	I1213 16:16:06.716865 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:06.727125 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:06.727193 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:06.752991 1542350 cri.go:89] found id: ""
	I1213 16:16:06.753015 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.753024 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:06.753030 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:06.753089 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:06.777092 1542350 cri.go:89] found id: ""
	I1213 16:16:06.777116 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.777125 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:06.777130 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:06.777188 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:06.805182 1542350 cri.go:89] found id: ""
	I1213 16:16:06.805256 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.805278 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:06.805292 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:06.805363 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:06.833454 1542350 cri.go:89] found id: ""
	I1213 16:16:06.833477 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.833486 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:06.833492 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:06.833553 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:06.864279 1542350 cri.go:89] found id: ""
	I1213 16:16:06.864303 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.864311 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:06.864318 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:06.864379 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:06.889879 1542350 cri.go:89] found id: ""
	I1213 16:16:06.889905 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.889914 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:06.889920 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:06.889980 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:06.913566 1542350 cri.go:89] found id: ""
	I1213 16:16:06.913600 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.913609 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:06.913615 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:06.913682 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:06.939090 1542350 cri.go:89] found id: ""
	I1213 16:16:06.939161 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.939199 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:06.939226 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:06.939253 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:06.994546 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:06.994587 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:07.012062 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:07.012099 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:07.079574 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:07.070333    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.070833    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.072777    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.073334    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.074988    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:07.070333    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.070833    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.072777    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.073334    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.074988    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:07.079597 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:07.079609 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:07.106688 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:07.106772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:09.648446 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:09.659497 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:09.659572 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:09.685004 1542350 cri.go:89] found id: ""
	I1213 16:16:09.685031 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.685040 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:09.685047 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:09.685106 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:09.710322 1542350 cri.go:89] found id: ""
	I1213 16:16:09.710350 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.710359 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:09.710365 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:09.710424 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:09.736183 1542350 cri.go:89] found id: ""
	I1213 16:16:09.736209 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.736218 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:09.736225 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:09.736328 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:09.761808 1542350 cri.go:89] found id: ""
	I1213 16:16:09.761831 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.761839 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:09.761846 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:09.761907 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:09.788666 1542350 cri.go:89] found id: ""
	I1213 16:16:09.788690 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.788699 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:09.788705 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:09.788767 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:09.815565 1542350 cri.go:89] found id: ""
	I1213 16:16:09.815590 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.815598 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:09.815604 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:09.815663 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:09.841443 1542350 cri.go:89] found id: ""
	I1213 16:16:09.841466 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.841475 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:09.841481 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:09.841538 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:09.870775 1542350 cri.go:89] found id: ""
	I1213 16:16:09.870798 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.870806 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:09.870818 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:09.870829 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:09.927243 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:09.927279 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:09.944116 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:09.944150 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:10.018299 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:10.008262    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.009034    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011237    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011729    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.013703    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:10.008262    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.009034    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011237    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011729    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.013703    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:10.018334 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:10.018348 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:10.062337 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:10.062384 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
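
The recurring "connection refused" from kubectl is consistent with the container listings: with no kube-apiserver container running, nothing is listening on 127.0.0.1:8443 inside the node, so every describe-nodes attempt fails the same way. An illustrative way to confirm that from a shell on the node (these two commands are assumptions, not part of the test harness; 8443 is the apiserver port shown in the errors):

	# Hypothetical spot checks: is anything listening on 8443, and does the
	# apiserver health endpoint answer?
	sudo ss -ltn 'sport = :8443'
	curl -k --max-time 5 https://localhost:8443/healthz || echo "apiserver not reachable"
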
	I1213 16:16:12.610748 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:12.622191 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:12.622266 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:12.654912 1542350 cri.go:89] found id: ""
	I1213 16:16:12.654939 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.654948 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:12.654955 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:12.655017 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:12.679878 1542350 cri.go:89] found id: ""
	I1213 16:16:12.679904 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.679913 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:12.679919 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:12.679981 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:12.708594 1542350 cri.go:89] found id: ""
	I1213 16:16:12.708619 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.708628 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:12.708641 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:12.708703 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:12.734832 1542350 cri.go:89] found id: ""
	I1213 16:16:12.734857 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.734866 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:12.734872 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:12.734931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:12.760756 1542350 cri.go:89] found id: ""
	I1213 16:16:12.760784 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.760793 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:12.760799 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:12.760860 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:12.786434 1542350 cri.go:89] found id: ""
	I1213 16:16:12.786470 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.786479 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:12.786486 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:12.786558 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:12.810666 1542350 cri.go:89] found id: ""
	I1213 16:16:12.810699 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.810708 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:12.810714 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:12.810779 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:12.835161 1542350 cri.go:89] found id: ""
	I1213 16:16:12.835206 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.835216 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:12.835225 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:12.835238 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:12.851412 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:12.851438 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:12.919002 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:12.910126    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.910878    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.912652    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.913309    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.915168    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:12.910126    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.910878    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.912652    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.913309    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.915168    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:12.919032 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:12.919045 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:12.945016 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:12.945054 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:12.975303 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:12.975353 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:15.533437 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:15.545434 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:15.545514 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:15.570277 1542350 cri.go:89] found id: ""
	I1213 16:16:15.570303 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.570353 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:15.570362 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:15.570427 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:15.602983 1542350 cri.go:89] found id: ""
	I1213 16:16:15.603009 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.603017 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:15.603023 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:15.603082 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:15.631137 1542350 cri.go:89] found id: ""
	I1213 16:16:15.631172 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.631181 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:15.631187 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:15.631245 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:15.664783 1542350 cri.go:89] found id: ""
	I1213 16:16:15.664810 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.664819 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:15.664825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:15.664886 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:15.691237 1542350 cri.go:89] found id: ""
	I1213 16:16:15.691264 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.691274 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:15.691280 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:15.691368 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:15.715449 1542350 cri.go:89] found id: ""
	I1213 16:16:15.715473 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.715482 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:15.715489 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:15.715553 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:15.740667 1542350 cri.go:89] found id: ""
	I1213 16:16:15.740692 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.740701 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:15.740707 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:15.740770 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:15.765160 1542350 cri.go:89] found id: ""
	I1213 16:16:15.765182 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.765191 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:15.765200 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:15.765212 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:15.820427 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:15.820466 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:15.836513 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:15.836541 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:15.903389 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:15.894908    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.895741    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897308    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897848    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.899415    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:15.894908    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.895741    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897308    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897848    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.899415    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:15.903412 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:15.903427 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:15.928787 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:15.928825 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:18.458780 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:18.469268 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:18.469341 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:18.497781 1542350 cri.go:89] found id: ""
	I1213 16:16:18.497811 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.497824 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:18.497831 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:18.497918 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:18.522772 1542350 cri.go:89] found id: ""
	I1213 16:16:18.522799 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.522808 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:18.522815 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:18.522874 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:18.549419 1542350 cri.go:89] found id: ""
	I1213 16:16:18.549443 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.549452 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:18.549457 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:18.549524 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:18.573853 1542350 cri.go:89] found id: ""
	I1213 16:16:18.573881 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.573889 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:18.573896 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:18.573960 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:18.604140 1542350 cri.go:89] found id: ""
	I1213 16:16:18.604167 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.604188 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:18.604194 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:18.604264 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:18.637649 1542350 cri.go:89] found id: ""
	I1213 16:16:18.637677 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.637686 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:18.637692 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:18.637752 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:18.668019 1542350 cri.go:89] found id: ""
	I1213 16:16:18.668045 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.668053 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:18.668059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:18.668120 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:18.694456 1542350 cri.go:89] found id: ""
	I1213 16:16:18.694482 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.694493 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:18.694503 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:18.694515 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:18.722967 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:18.722995 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:18.780808 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:18.780844 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:18.797393 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:18.797421 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:18.866061 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:18.858113    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.858623    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860137    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860610    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.862072    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:18.858113    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.858623    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860137    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860610    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.862072    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:18.866083 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:18.866096 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:21.391436 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:21.403266 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:21.403363 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:21.429372 1542350 cri.go:89] found id: ""
	I1213 16:16:21.429405 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.429415 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:21.429420 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:21.429479 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:21.454218 1542350 cri.go:89] found id: ""
	I1213 16:16:21.454287 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.454311 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:21.454329 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:21.454420 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:21.478016 1542350 cri.go:89] found id: ""
	I1213 16:16:21.478041 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.478049 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:21.478055 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:21.478112 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:21.504574 1542350 cri.go:89] found id: ""
	I1213 16:16:21.504612 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.504622 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:21.504629 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:21.504692 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:21.531727 1542350 cri.go:89] found id: ""
	I1213 16:16:21.531761 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.531770 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:21.531777 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:21.531836 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:21.556964 1542350 cri.go:89] found id: ""
	I1213 16:16:21.556999 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.557010 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:21.557018 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:21.557077 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:21.592445 1542350 cri.go:89] found id: ""
	I1213 16:16:21.592509 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.592533 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:21.592550 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:21.592645 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:21.620898 1542350 cri.go:89] found id: ""
	I1213 16:16:21.620920 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.620928 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:21.620937 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:21.620949 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:21.682810 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:21.682846 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:21.699275 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:21.699375 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:21.766336 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:21.758580    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.759107    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.760601    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.761028    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.762478    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:21.758580    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.759107    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.760601    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.761028    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.762478    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:21.766397 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:21.766426 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:21.791266 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:21.791300 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:24.319481 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:24.330216 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:24.330310 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:24.369003 1542350 cri.go:89] found id: ""
	I1213 16:16:24.369033 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.369041 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:24.369047 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:24.369106 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:24.396473 1542350 cri.go:89] found id: ""
	I1213 16:16:24.396502 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.396511 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:24.396516 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:24.396580 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:24.436915 1542350 cri.go:89] found id: ""
	I1213 16:16:24.436939 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.436948 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:24.436953 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:24.437013 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:24.465118 1542350 cri.go:89] found id: ""
	I1213 16:16:24.465139 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.465147 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:24.465153 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:24.465211 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:24.490097 1542350 cri.go:89] found id: ""
	I1213 16:16:24.490121 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.490130 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:24.490136 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:24.490196 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:24.520031 1542350 cri.go:89] found id: ""
	I1213 16:16:24.520096 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.520120 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:24.520141 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:24.520214 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:24.545891 1542350 cri.go:89] found id: ""
	I1213 16:16:24.545919 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.545928 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:24.545933 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:24.546014 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:24.574276 1542350 cri.go:89] found id: ""
	I1213 16:16:24.574313 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.574323 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:24.574353 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:24.574387 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:24.611068 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:24.611145 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:24.677764 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:24.677808 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:24.696759 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:24.696802 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:24.773564 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:24.765018    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.765700    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767375    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767981    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.769749    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:24.765018    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.765700    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767375    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767981    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.769749    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:24.773586 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:24.773598 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:27.299826 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:27.310825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:27.310902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:27.341771 1542350 cri.go:89] found id: ""
	I1213 16:16:27.341794 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.341803 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:27.341810 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:27.341876 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:27.369884 1542350 cri.go:89] found id: ""
	I1213 16:16:27.369908 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.369917 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:27.369923 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:27.369988 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:27.402575 1542350 cri.go:89] found id: ""
	I1213 16:16:27.402598 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.402606 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:27.402612 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:27.402680 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:27.429116 1542350 cri.go:89] found id: ""
	I1213 16:16:27.429157 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.429169 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:27.429176 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:27.429245 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:27.456147 1542350 cri.go:89] found id: ""
	I1213 16:16:27.456174 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.456183 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:27.456191 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:27.456254 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:27.481262 1542350 cri.go:89] found id: ""
	I1213 16:16:27.481288 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.481297 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:27.481304 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:27.481370 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:27.507140 1542350 cri.go:89] found id: ""
	I1213 16:16:27.507169 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.507179 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:27.507185 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:27.507269 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:27.532060 1542350 cri.go:89] found id: ""
	I1213 16:16:27.532139 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.532162 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:27.532180 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:27.532193 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:27.588083 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:27.588123 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:27.605875 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:27.605906 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:27.677799 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:27.667405    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.668242    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.669729    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.670202    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.673720    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:27.667405    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.668242    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.669729    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.670202    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.673720    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:27.677822 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:27.677834 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:27.703668 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:27.703704 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:30.232616 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:30.244334 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:30.244408 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:30.269730 1542350 cri.go:89] found id: ""
	I1213 16:16:30.269757 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.269765 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:30.269771 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:30.269830 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:30.296665 1542350 cri.go:89] found id: ""
	I1213 16:16:30.296693 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.296702 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:30.296709 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:30.296832 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:30.322172 1542350 cri.go:89] found id: ""
	I1213 16:16:30.322251 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.322276 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:30.322296 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:30.322405 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:30.364083 1542350 cri.go:89] found id: ""
	I1213 16:16:30.364113 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.364125 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:30.364138 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:30.364206 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:30.405727 1542350 cri.go:89] found id: ""
	I1213 16:16:30.405751 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.405759 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:30.405765 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:30.405825 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:30.432819 1542350 cri.go:89] found id: ""
	I1213 16:16:30.432846 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.432855 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:30.432862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:30.432921 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:30.458202 1542350 cri.go:89] found id: ""
	I1213 16:16:30.458228 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.458237 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:30.458243 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:30.458310 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:30.482950 1542350 cri.go:89] found id: ""
	I1213 16:16:30.482977 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.482987 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:30.482996 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:30.483008 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:30.507886 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:30.507921 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:30.538090 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:30.538159 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:30.593644 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:30.593729 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:30.610246 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:30.610272 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:30.684359 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:30.676539    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.677025    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678556    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678874    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.680436    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:30.676539    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.677025    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678556    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678874    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.680436    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:33.184602 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:33.195455 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:33.195556 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:33.225437 1542350 cri.go:89] found id: ""
	I1213 16:16:33.225459 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.225468 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:33.225474 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:33.225541 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:33.250024 1542350 cri.go:89] found id: ""
	I1213 16:16:33.250089 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.250113 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:33.250131 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:33.250218 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:33.275721 1542350 cri.go:89] found id: ""
	I1213 16:16:33.275747 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.275755 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:33.275762 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:33.275823 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:33.300346 1542350 cri.go:89] found id: ""
	I1213 16:16:33.300368 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.300377 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:33.300383 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:33.300442 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:33.324866 1542350 cri.go:89] found id: ""
	I1213 16:16:33.324889 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.324897 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:33.324904 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:33.324963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:33.354142 1542350 cri.go:89] found id: ""
	I1213 16:16:33.354216 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.354239 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:33.354257 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:33.354347 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:33.388195 1542350 cri.go:89] found id: ""
	I1213 16:16:33.388216 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.388224 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:33.388230 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:33.388286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:33.416283 1542350 cri.go:89] found id: ""
	I1213 16:16:33.416306 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.416314 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:33.416325 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:33.416337 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:33.432175 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:33.432206 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:33.499040 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:33.490874    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.491494    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493033    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493510    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.495051    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:33.490874    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.491494    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493033    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493510    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.495051    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:33.499062 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:33.499074 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:33.524925 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:33.524958 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:33.554998 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:33.555026 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:36.110953 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:36.121861 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:36.121930 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:36.146369 1542350 cri.go:89] found id: ""
	I1213 16:16:36.146429 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.146450 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:36.146476 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:36.146557 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:36.171595 1542350 cri.go:89] found id: ""
	I1213 16:16:36.171617 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.171625 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:36.171631 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:36.171693 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:36.196869 1542350 cri.go:89] found id: ""
	I1213 16:16:36.196891 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.196900 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:36.196906 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:36.196963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:36.221290 1542350 cri.go:89] found id: ""
	I1213 16:16:36.221317 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.221326 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:36.221338 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:36.221400 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:36.246254 1542350 cri.go:89] found id: ""
	I1213 16:16:36.246280 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.246289 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:36.246294 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:36.246352 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:36.276463 1542350 cri.go:89] found id: ""
	I1213 16:16:36.276486 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.276494 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:36.276500 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:36.276565 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:36.302414 1542350 cri.go:89] found id: ""
	I1213 16:16:36.302446 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.302454 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:36.302460 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:36.302530 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:36.327676 1542350 cri.go:89] found id: ""
	I1213 16:16:36.327753 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.327770 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:36.327781 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:36.327793 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:36.347589 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:36.347658 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:36.422910 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:36.414535    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.415436    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417255    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417652    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.419100    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:36.414535    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.415436    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417255    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417652    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.419100    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:36.422940 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:36.422968 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:36.449077 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:36.449114 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:36.476904 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:36.476935 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:39.032927 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:39.043398 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:39.043466 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:39.068941 1542350 cri.go:89] found id: ""
	I1213 16:16:39.068968 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.068977 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:39.068983 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:39.069040 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:39.094525 1542350 cri.go:89] found id: ""
	I1213 16:16:39.094548 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.094557 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:39.094564 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:39.094626 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:39.118854 1542350 cri.go:89] found id: ""
	I1213 16:16:39.118875 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.118884 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:39.118890 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:39.118946 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:39.147615 1542350 cri.go:89] found id: ""
	I1213 16:16:39.147642 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.147651 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:39.147657 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:39.147719 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:39.173015 1542350 cri.go:89] found id: ""
	I1213 16:16:39.173038 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.173047 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:39.173053 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:39.173121 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:39.198427 1542350 cri.go:89] found id: ""
	I1213 16:16:39.198453 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.198462 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:39.198468 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:39.198525 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:39.223491 1542350 cri.go:89] found id: ""
	I1213 16:16:39.223514 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.223522 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:39.223528 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:39.223587 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:39.254117 1542350 cri.go:89] found id: ""
	I1213 16:16:39.254148 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.254157 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:39.254166 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:39.254178 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:39.313667 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:39.313706 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:39.331137 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:39.331215 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:39.414971 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:39.406302    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.407133    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.408880    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.409415    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.411052    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:39.406302    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.407133    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.408880    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.409415    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.411052    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:39.414990 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:39.415003 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:39.440561 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:39.440604 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:41.973087 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:41.983385 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:41.983456 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:42.010547 1542350 cri.go:89] found id: ""
	I1213 16:16:42.010644 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.010658 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:42.010666 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:42.010780 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:42.041355 1542350 cri.go:89] found id: ""
	I1213 16:16:42.041379 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.041388 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:42.041394 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:42.041462 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:42.074781 1542350 cri.go:89] found id: ""
	I1213 16:16:42.074808 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.074818 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:42.074825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:42.074895 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:42.105943 1542350 cri.go:89] found id: ""
	I1213 16:16:42.105972 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.105980 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:42.105987 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:42.106062 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:42.144036 1542350 cri.go:89] found id: ""
	I1213 16:16:42.144062 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.144070 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:42.144077 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:42.144144 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:42.177438 1542350 cri.go:89] found id: ""
	I1213 16:16:42.177464 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.177474 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:42.177482 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:42.177555 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:42.209616 1542350 cri.go:89] found id: ""
	I1213 16:16:42.209644 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.209653 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:42.209662 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:42.209730 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:42.240251 1542350 cri.go:89] found id: ""
	I1213 16:16:42.240283 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.240293 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:42.240303 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:42.240317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:42.274974 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:42.275008 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:42.333409 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:42.333488 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:42.353909 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:42.353998 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:42.431547 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:42.422852    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.423662    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425409    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425733    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.427251    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:42.422852    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.423662    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425409    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425733    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.427251    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:42.431570 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:42.431582 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:44.957982 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:44.968708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:44.968778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:44.998179 1542350 cri.go:89] found id: ""
	I1213 16:16:44.998205 1542350 logs.go:282] 0 containers: []
	W1213 16:16:44.998214 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:44.998220 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:44.998281 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:45.055672 1542350 cri.go:89] found id: ""
	I1213 16:16:45.055695 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.055705 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:45.055712 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:45.055785 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:45.112504 1542350 cri.go:89] found id: ""
	I1213 16:16:45.112598 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.112625 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:45.112646 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:45.112821 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:45.148966 1542350 cri.go:89] found id: ""
	I1213 16:16:45.148993 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.149002 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:45.149008 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:45.149081 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:45.215276 1542350 cri.go:89] found id: ""
	I1213 16:16:45.215383 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.215547 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:45.215573 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:45.215685 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:45.266343 1542350 cri.go:89] found id: ""
	I1213 16:16:45.266422 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.266448 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:45.266469 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:45.266569 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:45.311801 1542350 cri.go:89] found id: ""
	I1213 16:16:45.311877 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.311905 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:45.311925 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:45.312039 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:45.345856 1542350 cri.go:89] found id: ""
	I1213 16:16:45.345884 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.345894 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:45.345904 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:45.345928 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:45.416309 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:45.416392 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:45.433509 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:45.433593 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:45.504820 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:45.495815    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.496673    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498435    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498745    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.500220    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:45.495815    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.496673    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498435    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498745    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.500220    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:45.504841 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:45.504855 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:45.530797 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:45.530836 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:48.061294 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:48.072582 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:48.072653 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:48.101139 1542350 cri.go:89] found id: ""
	I1213 16:16:48.101164 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.101173 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:48.101179 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:48.101250 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:48.127077 1542350 cri.go:89] found id: ""
	I1213 16:16:48.127100 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.127109 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:48.127115 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:48.127179 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:48.152708 1542350 cri.go:89] found id: ""
	I1213 16:16:48.152731 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.152740 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:48.152746 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:48.152806 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:48.183194 1542350 cri.go:89] found id: ""
	I1213 16:16:48.183220 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.183228 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:48.183235 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:48.183295 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:48.208544 1542350 cri.go:89] found id: ""
	I1213 16:16:48.208612 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.208638 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:48.208658 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:48.208773 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:48.234599 1542350 cri.go:89] found id: ""
	I1213 16:16:48.234633 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.234642 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:48.234667 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:48.234745 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:48.259586 1542350 cri.go:89] found id: ""
	I1213 16:16:48.259614 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.259623 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:48.259629 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:48.259712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:48.283477 1542350 cri.go:89] found id: ""
	I1213 16:16:48.283499 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.283509 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:48.283542 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:48.283561 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:48.339116 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:48.339190 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:48.360686 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:48.360767 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:48.433619 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:48.425212    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.425811    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427513    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427900    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.429452    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:48.425212    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.425811    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427513    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427900    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.429452    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:48.433643 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:48.433655 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:48.458793 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:48.458837 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:50.988521 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:50.999862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:50.999930 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:51.029019 1542350 cri.go:89] found id: ""
	I1213 16:16:51.029045 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.029054 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:51.029060 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:51.029132 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:51.058195 1542350 cri.go:89] found id: ""
	I1213 16:16:51.058222 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.058231 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:51.058237 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:51.058297 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:51.083486 1542350 cri.go:89] found id: ""
	I1213 16:16:51.083512 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.083521 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:51.083527 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:51.083589 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:51.108698 1542350 cri.go:89] found id: ""
	I1213 16:16:51.108723 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.108733 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:51.108739 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:51.108801 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:51.133979 1542350 cri.go:89] found id: ""
	I1213 16:16:51.134003 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.134011 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:51.134017 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:51.134074 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:51.161527 1542350 cri.go:89] found id: ""
	I1213 16:16:51.161552 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.161562 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:51.161568 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:51.161627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:51.186814 1542350 cri.go:89] found id: ""
	I1213 16:16:51.186841 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.186850 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:51.186856 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:51.186916 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:51.216180 1542350 cri.go:89] found id: ""
	I1213 16:16:51.216212 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.216221 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:51.216230 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:51.216245 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:51.273877 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:51.273919 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:51.291469 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:51.291502 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:51.365379 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:51.356884    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.357610    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359231    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359765    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.361313    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:51.356884    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.357610    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359231    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359765    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.361313    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:51.365447 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:51.365471 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:51.393925 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:51.393997 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:53.927124 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:53.937787 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:53.937865 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:53.965198 1542350 cri.go:89] found id: ""
	I1213 16:16:53.965222 1542350 logs.go:282] 0 containers: []
	W1213 16:16:53.965230 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:53.965236 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:53.965295 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:53.990127 1542350 cri.go:89] found id: ""
	I1213 16:16:53.990153 1542350 logs.go:282] 0 containers: []
	W1213 16:16:53.990162 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:53.990168 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:53.990227 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:54.017573 1542350 cri.go:89] found id: ""
	I1213 16:16:54.017600 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.017610 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:54.017627 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:54.017691 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:54.042201 1542350 cri.go:89] found id: ""
	I1213 16:16:54.042223 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.042232 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:54.042239 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:54.042297 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:54.069040 1542350 cri.go:89] found id: ""
	I1213 16:16:54.069064 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.069072 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:54.069079 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:54.069139 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:54.094593 1542350 cri.go:89] found id: ""
	I1213 16:16:54.094614 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.094624 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:54.094630 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:54.094692 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:54.118976 1542350 cri.go:89] found id: ""
	I1213 16:16:54.119047 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.119070 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:54.119088 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:54.119162 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:54.145323 1542350 cri.go:89] found id: ""
	I1213 16:16:54.145346 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.145355 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:54.145364 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:54.145375 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:54.170838 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:54.170873 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:54.198725 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:54.198752 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:54.253610 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:54.253646 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:54.272399 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:54.272428 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:54.360945 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:54.351253    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.352548    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.354388    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.355099    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.356700    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:54.351253    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.352548    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.354388    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.355099    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.356700    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:56.861910 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:56.873998 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:56.874110 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:56.904398 1542350 cri.go:89] found id: ""
	I1213 16:16:56.904423 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.904432 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:56.904438 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:56.904498 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:56.928756 1542350 cri.go:89] found id: ""
	I1213 16:16:56.928783 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.928792 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:56.928798 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:56.928856 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:56.952449 1542350 cri.go:89] found id: ""
	I1213 16:16:56.952473 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.952481 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:56.952487 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:56.952544 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:56.976949 1542350 cri.go:89] found id: ""
	I1213 16:16:56.976973 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.976981 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:56.976988 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:56.977074 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:57.001996 1542350 cri.go:89] found id: ""
	I1213 16:16:57.002023 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.002032 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:57.002039 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:57.002107 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:57.033494 1542350 cri.go:89] found id: ""
	I1213 16:16:57.033519 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.033527 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:57.033533 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:57.033590 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:57.057055 1542350 cri.go:89] found id: ""
	I1213 16:16:57.057082 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.057090 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:57.057096 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:57.057153 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:57.086023 1542350 cri.go:89] found id: ""
	I1213 16:16:57.086047 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.086057 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:57.086066 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:57.086078 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:57.140604 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:57.140639 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:57.156471 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:57.156501 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:57.226365 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:57.217689   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.218500   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220107   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220657   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.222574   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:57.217689   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.218500   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220107   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220657   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.222574   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:57.226409 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:57.226425 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:57.251875 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:57.251911 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:59.781524 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:59.792544 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:59.792620 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:59.817081 1542350 cri.go:89] found id: ""
	I1213 16:16:59.817108 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.817123 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:59.817130 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:59.817197 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:59.854425 1542350 cri.go:89] found id: ""
	I1213 16:16:59.854453 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.854463 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:59.854469 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:59.854529 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:59.891724 1542350 cri.go:89] found id: ""
	I1213 16:16:59.891750 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.891759 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:59.891766 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:59.891826 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:59.921656 1542350 cri.go:89] found id: ""
	I1213 16:16:59.921682 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.921691 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:59.921697 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:59.921757 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:59.946905 1542350 cri.go:89] found id: ""
	I1213 16:16:59.946930 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.946943 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:59.946949 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:59.947011 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:59.974061 1542350 cri.go:89] found id: ""
	I1213 16:16:59.974087 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.974096 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:59.974103 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:59.974181 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:00.003912 1542350 cri.go:89] found id: ""
	I1213 16:17:00.003945 1542350 logs.go:282] 0 containers: []
	W1213 16:17:00.003955 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:00.003962 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:00.004041 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:00.129167 1542350 cri.go:89] found id: ""
	I1213 16:17:00.129242 1542350 logs.go:282] 0 containers: []
	W1213 16:17:00.129267 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:00.129291 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:00.129321 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:00.325276 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:00.316397   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.317316   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.318748   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.319511   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.320676   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:00.316397   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.317316   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.318748   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.319511   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.320676   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:00.325303 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:00.325317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:00.357630 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:00.357684 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:00.417887 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:00.417929 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:00.512817 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:00.512861 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:03.034231 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:03.045928 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:03.046041 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:03.073150 1542350 cri.go:89] found id: ""
	I1213 16:17:03.073178 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.073187 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:03.073194 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:03.073257 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:03.100010 1542350 cri.go:89] found id: ""
	I1213 16:17:03.100036 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.100046 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:03.100052 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:03.100118 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:03.126901 1542350 cri.go:89] found id: ""
	I1213 16:17:03.126929 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.126938 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:03.126944 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:03.127007 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:03.158512 1542350 cri.go:89] found id: ""
	I1213 16:17:03.158538 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.158547 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:03.158554 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:03.158623 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:03.186730 1542350 cri.go:89] found id: ""
	I1213 16:17:03.186757 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.186766 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:03.186773 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:03.186843 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:03.213877 1542350 cri.go:89] found id: ""
	I1213 16:17:03.213913 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.213922 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:03.213929 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:03.214000 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:03.244284 1542350 cri.go:89] found id: ""
	I1213 16:17:03.244360 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.244382 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:03.244401 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:03.244496 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:03.272102 1542350 cri.go:89] found id: ""
	I1213 16:17:03.272193 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.272210 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:03.272221 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:03.272234 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:03.330001 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:03.330036 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:03.347681 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:03.347716 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:03.430544 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:03.421427   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.422228   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424046   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424538   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.426050   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:03.421427   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.422228   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424046   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424538   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.426050   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:03.430566 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:03.430581 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:03.457512 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:03.457552 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:05.988326 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:06.000598 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:06.000678 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:06.036782 1542350 cri.go:89] found id: ""
	I1213 16:17:06.036859 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.036876 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:06.036891 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:06.036960 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:06.066595 1542350 cri.go:89] found id: ""
	I1213 16:17:06.066623 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.066633 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:06.066640 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:06.066705 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:06.095017 1542350 cri.go:89] found id: ""
	I1213 16:17:06.095047 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.095057 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:06.095064 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:06.095146 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:06.123113 1542350 cri.go:89] found id: ""
	I1213 16:17:06.123140 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.123150 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:06.123156 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:06.123223 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:06.150821 1542350 cri.go:89] found id: ""
	I1213 16:17:06.150847 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.150856 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:06.150862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:06.150925 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:06.176578 1542350 cri.go:89] found id: ""
	I1213 16:17:06.176608 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.176616 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:06.176623 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:06.176690 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:06.207351 1542350 cri.go:89] found id: ""
	I1213 16:17:06.207387 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.207397 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:06.207404 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:06.207468 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:06.233849 1542350 cri.go:89] found id: ""
	I1213 16:17:06.233872 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.233881 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:06.233890 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:06.233907 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:06.250685 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:06.250716 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:06.319519 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:06.310314   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.310885   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.312626   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.313247   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.315150   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:06.310314   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.310885   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.312626   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.313247   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.315150   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:06.319544 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:06.319566 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:06.346128 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:06.346163 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:06.386358 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:06.386439 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:08.950033 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:08.960761 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:08.960908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:08.984689 1542350 cri.go:89] found id: ""
	I1213 16:17:08.984727 1542350 logs.go:282] 0 containers: []
	W1213 16:17:08.984737 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:08.984760 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:08.984839 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:09.014786 1542350 cri.go:89] found id: ""
	I1213 16:17:09.014811 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.014820 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:09.014826 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:09.014890 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:09.044222 1542350 cri.go:89] found id: ""
	I1213 16:17:09.044257 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.044267 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:09.044276 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:09.044344 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:09.077612 1542350 cri.go:89] found id: ""
	I1213 16:17:09.077685 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.077708 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:09.077726 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:09.077815 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:09.105512 1542350 cri.go:89] found id: ""
	I1213 16:17:09.105535 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.105545 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:09.105551 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:09.105617 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:09.129780 1542350 cri.go:89] found id: ""
	I1213 16:17:09.129803 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.129811 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:09.129817 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:09.129878 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:09.154967 1542350 cri.go:89] found id: ""
	I1213 16:17:09.154993 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.155002 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:09.155009 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:09.155076 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:09.179699 1542350 cri.go:89] found id: ""
	I1213 16:17:09.179763 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.179789 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:09.179806 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:09.179817 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:09.235549 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:09.235580 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:09.251403 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:09.251431 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:09.319531 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:09.311333   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.311986   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.313546   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.314046   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.315495   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:09.311333   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.311986   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.313546   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.314046   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.315495   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:09.319549 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:09.319561 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:09.346608 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:09.346650 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:11.878089 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:11.889358 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:11.889432 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:11.915293 1542350 cri.go:89] found id: ""
	I1213 16:17:11.915330 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.915339 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:11.915346 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:11.915408 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:11.945256 1542350 cri.go:89] found id: ""
	I1213 16:17:11.945334 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.945359 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:11.945374 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:11.945452 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:11.969767 1542350 cri.go:89] found id: ""
	I1213 16:17:11.969794 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.969803 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:11.969809 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:11.969871 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:11.993969 1542350 cri.go:89] found id: ""
	I1213 16:17:11.993996 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.994005 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:11.994011 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:11.994089 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:12.029493 1542350 cri.go:89] found id: ""
	I1213 16:17:12.029521 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.029531 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:12.029543 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:12.029608 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:12.059180 1542350 cri.go:89] found id: ""
	I1213 16:17:12.059208 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.059217 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:12.059223 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:12.059283 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:12.087232 1542350 cri.go:89] found id: ""
	I1213 16:17:12.087261 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.087270 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:12.087276 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:12.087371 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:12.112813 1542350 cri.go:89] found id: ""
	I1213 16:17:12.112835 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.112844 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:12.112853 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:12.112864 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:12.138376 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:12.138408 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:12.166357 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:12.166387 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:12.222375 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:12.222410 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:12.239215 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:12.239247 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:12.308445 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:12.300806   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.301332   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.302968   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.303390   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.304404   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:12.300806   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.301332   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.302968   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.303390   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.304404   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:14.808692 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:14.819373 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:14.819444 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:14.852674 1542350 cri.go:89] found id: ""
	I1213 16:17:14.852703 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.852712 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:14.852728 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:14.852788 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:14.883668 1542350 cri.go:89] found id: ""
	I1213 16:17:14.883695 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.883704 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:14.883710 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:14.883767 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:14.911607 1542350 cri.go:89] found id: ""
	I1213 16:17:14.911630 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.911638 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:14.911644 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:14.911706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:14.936933 1542350 cri.go:89] found id: ""
	I1213 16:17:14.936960 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.936970 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:14.936977 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:14.937035 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:14.962547 1542350 cri.go:89] found id: ""
	I1213 16:17:14.962570 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.962580 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:14.962586 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:14.962689 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:14.986795 1542350 cri.go:89] found id: ""
	I1213 16:17:14.986820 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.986836 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:14.986843 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:14.986903 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:15.033107 1542350 cri.go:89] found id: ""
	I1213 16:17:15.033185 1542350 logs.go:282] 0 containers: []
	W1213 16:17:15.033224 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:15.033257 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:15.033365 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:15.061981 1542350 cri.go:89] found id: ""
	I1213 16:17:15.062060 1542350 logs.go:282] 0 containers: []
	W1213 16:17:15.062093 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:15.062116 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:15.062143 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:15.118734 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:15.118772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:15.135655 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:15.135685 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:15.203637 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:15.195493   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.196273   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.197861   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.198155   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.199758   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:15.195493   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.196273   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.197861   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.198155   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.199758   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:15.203658 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:15.203670 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:15.229691 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:15.229730 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:17.757141 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:17.767810 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:17.767883 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:17.795906 1542350 cri.go:89] found id: ""
	I1213 16:17:17.795930 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.795939 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:17.795945 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:17.796011 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:17.820499 1542350 cri.go:89] found id: ""
	I1213 16:17:17.820525 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.820534 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:17.820540 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:17.820597 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:17.852893 1542350 cri.go:89] found id: ""
	I1213 16:17:17.852922 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.852931 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:17.852936 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:17.852998 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:17.882522 1542350 cri.go:89] found id: ""
	I1213 16:17:17.882550 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.882559 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:17.882567 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:17.882625 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:17.910091 1542350 cri.go:89] found id: ""
	I1213 16:17:17.910119 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.910128 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:17.910133 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:17.910194 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:17.934842 1542350 cri.go:89] found id: ""
	I1213 16:17:17.934877 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.934886 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:17.934892 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:17.934957 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:17.959436 1542350 cri.go:89] found id: ""
	I1213 16:17:17.959470 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.959480 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:17.959491 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:17.959563 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:17.984392 1542350 cri.go:89] found id: ""
	I1213 16:17:17.984422 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.984431 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:17.984440 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:17.984452 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:18.039527 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:18.039566 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:18.055611 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:18.055637 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:18.119895 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:18.111397   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.112071   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.113661   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.114180   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.115834   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:18.111397   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.112071   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.113661   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.114180   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.115834   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:18.119920 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:18.119935 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:18.145247 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:18.145282 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:20.679491 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:20.690101 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:20.690172 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:20.715727 1542350 cri.go:89] found id: ""
	I1213 16:17:20.715753 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.715770 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:20.715780 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:20.715849 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:20.743470 1542350 cri.go:89] found id: ""
	I1213 16:17:20.743496 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.743504 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:20.743511 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:20.743570 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:20.768457 1542350 cri.go:89] found id: ""
	I1213 16:17:20.768480 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.768496 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:20.768503 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:20.768561 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:20.792618 1542350 cri.go:89] found id: ""
	I1213 16:17:20.792644 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.792653 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:20.792660 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:20.792718 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:20.817055 1542350 cri.go:89] found id: ""
	I1213 16:17:20.817077 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.817087 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:20.817093 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:20.817155 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:20.847328 1542350 cri.go:89] found id: ""
	I1213 16:17:20.847351 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.847360 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:20.847366 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:20.847428 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:20.885859 1542350 cri.go:89] found id: ""
	I1213 16:17:20.885882 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.885891 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:20.885898 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:20.885956 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:20.915753 1542350 cri.go:89] found id: ""
	I1213 16:17:20.915784 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.915794 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:20.915803 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:20.915815 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:20.970894 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:20.970934 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:20.986885 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:20.986910 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:21.055027 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:21.046462   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.046998   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.048753   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.049334   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.051047   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:21.046462   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.046998   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.048753   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.049334   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.051047   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:21.055049 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:21.055062 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:21.079833 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:21.079866 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:23.608166 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:23.619347 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:23.619414 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:23.649699 1542350 cri.go:89] found id: ""
	I1213 16:17:23.649721 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.649729 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:23.649736 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:23.649795 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:23.675224 1542350 cri.go:89] found id: ""
	I1213 16:17:23.675246 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.675255 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:23.675261 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:23.675349 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:23.700895 1542350 cri.go:89] found id: ""
	I1213 16:17:23.700918 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.700927 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:23.700933 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:23.700996 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:23.729110 1542350 cri.go:89] found id: ""
	I1213 16:17:23.729176 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.729191 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:23.729198 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:23.729257 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:23.753661 1542350 cri.go:89] found id: ""
	I1213 16:17:23.753688 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.753697 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:23.753703 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:23.753774 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:23.778169 1542350 cri.go:89] found id: ""
	I1213 16:17:23.778217 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.778227 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:23.778234 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:23.778301 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:23.802589 1542350 cri.go:89] found id: ""
	I1213 16:17:23.802622 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.802631 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:23.802637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:23.802708 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:23.832514 1542350 cri.go:89] found id: ""
	I1213 16:17:23.832548 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.832558 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:23.832569 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:23.832582 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:23.917876 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:23.909157   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.909608   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.910836   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.911543   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.913354   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:23.909157   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.909608   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.910836   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.911543   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.913354   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:23.917899 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:23.917918 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:23.943509 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:23.943548 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:23.971452 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:23.971478 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:24.027358 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:24.027396 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:26.545810 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:26.556391 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:26.556463 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:26.580187 1542350 cri.go:89] found id: ""
	I1213 16:17:26.580210 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.580219 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:26.580239 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:26.580300 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:26.608397 1542350 cri.go:89] found id: ""
	I1213 16:17:26.608420 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.608429 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:26.608435 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:26.608496 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:26.636638 1542350 cri.go:89] found id: ""
	I1213 16:17:26.636661 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.636669 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:26.636675 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:26.636734 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:26.665248 1542350 cri.go:89] found id: ""
	I1213 16:17:26.665274 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.665283 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:26.665289 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:26.665365 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:26.695808 1542350 cri.go:89] found id: ""
	I1213 16:17:26.695835 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.695854 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:26.695861 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:26.695918 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:26.721653 1542350 cri.go:89] found id: ""
	I1213 16:17:26.721678 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.721687 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:26.721693 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:26.721751 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:26.750218 1542350 cri.go:89] found id: ""
	I1213 16:17:26.750241 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.750250 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:26.750256 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:26.750313 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:26.777036 1542350 cri.go:89] found id: ""
	I1213 16:17:26.777059 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.777068 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:26.777077 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:26.777088 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:26.833887 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:26.833929 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:26.851275 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:26.851303 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:26.934951 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:26.926741   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.927535   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929160   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929458   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.930955   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:26.926741   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.927535   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929160   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929458   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.930955   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:26.934973 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:26.934985 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:26.960388 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:26.960424 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:29.488577 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:29.499475 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:29.499551 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:29.524176 1542350 cri.go:89] found id: ""
	I1213 16:17:29.524202 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.524212 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:29.524219 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:29.524281 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:29.558368 1542350 cri.go:89] found id: ""
	I1213 16:17:29.558393 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.558408 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:29.558415 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:29.558504 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:29.589170 1542350 cri.go:89] found id: ""
	I1213 16:17:29.589197 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.589206 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:29.589212 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:29.589273 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:29.621623 1542350 cri.go:89] found id: ""
	I1213 16:17:29.621697 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.621722 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:29.621741 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:29.621828 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:29.651459 1542350 cri.go:89] found id: ""
	I1213 16:17:29.651534 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.651557 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:29.651584 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:29.651712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:29.676637 1542350 cri.go:89] found id: ""
	I1213 16:17:29.676663 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.676673 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:29.676679 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:29.676752 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:29.701821 1542350 cri.go:89] found id: ""
	I1213 16:17:29.701845 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.701855 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:29.701861 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:29.701920 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:29.726528 1542350 cri.go:89] found id: ""
	I1213 16:17:29.726555 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.726564 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:29.726574 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:29.726585 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:29.781999 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:29.782035 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:29.798088 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:29.798116 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:29.881323 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:29.870998   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.871883   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.873746   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.874663   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.876377   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:29.870998   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.871883   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.873746   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.874663   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.876377   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:29.881348 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:29.881361 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:29.911425 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:29.911464 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:32.442588 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:32.453594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:32.453664 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:32.479865 1542350 cri.go:89] found id: ""
	I1213 16:17:32.479893 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.479902 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:32.479909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:32.479975 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:32.505131 1542350 cri.go:89] found id: ""
	I1213 16:17:32.505159 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.505168 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:32.505175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:32.505239 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:32.529697 1542350 cri.go:89] found id: ""
	I1213 16:17:32.529723 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.529732 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:32.529738 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:32.529796 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:32.554812 1542350 cri.go:89] found id: ""
	I1213 16:17:32.554834 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.554850 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:32.554856 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:32.554915 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:32.582244 1542350 cri.go:89] found id: ""
	I1213 16:17:32.582270 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.582279 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:32.582286 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:32.582347 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:32.613711 1542350 cri.go:89] found id: ""
	I1213 16:17:32.613738 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.613747 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:32.613754 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:32.613818 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:32.642070 1542350 cri.go:89] found id: ""
	I1213 16:17:32.642097 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.642106 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:32.642112 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:32.642168 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:32.667382 1542350 cri.go:89] found id: ""
	I1213 16:17:32.667406 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.667415 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:32.667424 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:32.667436 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:32.683777 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:32.683808 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:32.750802 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:32.740355   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.741255   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.742960   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.743298   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.746651   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:32.740355   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.741255   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.742960   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.743298   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.746651   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:32.750824 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:32.750838 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:32.776516 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:32.776551 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:32.809331 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:32.809358 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:35.374938 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:35.387203 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:35.387276 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:35.412099 1542350 cri.go:89] found id: ""
	I1213 16:17:35.412124 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.412133 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:35.412139 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:35.412195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:35.436994 1542350 cri.go:89] found id: ""
	I1213 16:17:35.437031 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.437040 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:35.437047 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:35.437115 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:35.461531 1542350 cri.go:89] found id: ""
	I1213 16:17:35.461554 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.461562 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:35.461568 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:35.461627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:35.486070 1542350 cri.go:89] found id: ""
	I1213 16:17:35.486095 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.486105 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:35.486118 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:35.486176 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:35.515476 1542350 cri.go:89] found id: ""
	I1213 16:17:35.515501 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.515510 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:35.515516 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:35.515576 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:35.545886 1542350 cri.go:89] found id: ""
	I1213 16:17:35.545959 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.545995 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:35.546020 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:35.546110 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:35.575465 1542350 cri.go:89] found id: ""
	I1213 16:17:35.575489 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.575498 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:35.575504 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:35.575563 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:35.607235 1542350 cri.go:89] found id: ""
	I1213 16:17:35.607264 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.607273 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:35.607282 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:35.607294 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:35.671811 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:35.671850 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:35.687939 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:35.687972 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:35.751714 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:35.742668   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.743226   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.744917   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.745517   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.747275   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:35.742668   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.743226   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.744917   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.745517   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.747275   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:35.751733 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:35.751746 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:35.777517 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:35.777554 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:38.308841 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:38.319569 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:38.319645 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:38.344249 1542350 cri.go:89] found id: ""
	I1213 16:17:38.344276 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.344285 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:38.344291 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:38.344349 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:38.368637 1542350 cri.go:89] found id: ""
	I1213 16:17:38.368666 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.368676 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:38.368682 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:38.368746 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:38.397310 1542350 cri.go:89] found id: ""
	I1213 16:17:38.397335 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.397344 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:38.397350 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:38.397409 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:38.426892 1542350 cri.go:89] found id: ""
	I1213 16:17:38.426967 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.426989 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:38.427008 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:38.427091 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:38.451400 1542350 cri.go:89] found id: ""
	I1213 16:17:38.451423 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.451432 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:38.451437 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:38.451500 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:38.476411 1542350 cri.go:89] found id: ""
	I1213 16:17:38.476433 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.476441 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:38.476448 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:38.476506 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:38.502060 1542350 cri.go:89] found id: ""
	I1213 16:17:38.502083 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.502092 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:38.502098 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:38.502158 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:38.527156 1542350 cri.go:89] found id: ""
	I1213 16:17:38.527217 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.527240 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:38.527264 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:38.527289 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:38.583123 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:38.583161 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:38.606934 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:38.607014 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:38.678774 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:38.669944   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.670742   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672406   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672726   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.674153   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:38.669944   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.670742   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672406   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672726   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.674153   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:38.678794 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:38.678806 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:38.703623 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:38.703656 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:41.235499 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:41.246098 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:41.246199 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:41.272817 1542350 cri.go:89] found id: ""
	I1213 16:17:41.272884 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.272907 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:41.272921 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:41.272995 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:41.297573 1542350 cri.go:89] found id: ""
	I1213 16:17:41.297599 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.297608 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:41.297614 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:41.297722 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:41.325595 1542350 cri.go:89] found id: ""
	I1213 16:17:41.325663 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.325695 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:41.325708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:41.325784 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:41.350495 1542350 cri.go:89] found id: ""
	I1213 16:17:41.350519 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.350528 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:41.350534 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:41.350593 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:41.374833 1542350 cri.go:89] found id: ""
	I1213 16:17:41.374860 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.374869 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:41.374874 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:41.374931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:41.400881 1542350 cri.go:89] found id: ""
	I1213 16:17:41.400911 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.400920 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:41.400926 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:41.400983 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:41.425159 1542350 cri.go:89] found id: ""
	I1213 16:17:41.425182 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.425191 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:41.425197 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:41.425255 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:41.449690 1542350 cri.go:89] found id: ""
	I1213 16:17:41.449765 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.449788 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:41.449808 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:41.449845 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:41.465414 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:41.465441 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:41.531758 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:41.522350   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.523854   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.524927   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526433   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526949   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:41.522350   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.523854   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.524927   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526433   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526949   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:41.531782 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:41.531795 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:41.557072 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:41.557104 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:41.589367 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:41.589397 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:44.161155 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:44.173267 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:44.173342 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:44.202655 1542350 cri.go:89] found id: ""
	I1213 16:17:44.202682 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.202692 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:44.202699 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:44.202758 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:44.227871 1542350 cri.go:89] found id: ""
	I1213 16:17:44.227897 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.227905 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:44.227911 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:44.227972 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:44.253446 1542350 cri.go:89] found id: ""
	I1213 16:17:44.253473 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.253481 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:44.253487 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:44.253543 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:44.279358 1542350 cri.go:89] found id: ""
	I1213 16:17:44.279383 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.279392 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:44.279398 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:44.279464 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:44.303249 1542350 cri.go:89] found id: ""
	I1213 16:17:44.303275 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.303284 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:44.303344 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:44.303410 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:44.327446 1542350 cri.go:89] found id: ""
	I1213 16:17:44.327471 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.327480 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:44.327486 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:44.327546 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:44.353767 1542350 cri.go:89] found id: ""
	I1213 16:17:44.353793 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.353802 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:44.353808 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:44.353865 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:44.382033 1542350 cri.go:89] found id: ""
	I1213 16:17:44.382060 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.382068 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:44.382078 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:44.382089 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:44.436599 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:44.436634 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:44.452268 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:44.452298 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:44.515099 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:44.507045   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.507744   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509208   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509703   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.511153   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:44.507045   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.507744   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509208   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509703   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.511153   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:44.515122 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:44.515134 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:44.540023 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:44.540059 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:47.069691 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:47.080543 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:47.080615 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:47.114986 1542350 cri.go:89] found id: ""
	I1213 16:17:47.115062 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.115085 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:47.115103 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:47.115194 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:47.148767 1542350 cri.go:89] found id: ""
	I1213 16:17:47.148840 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.148850 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:47.148857 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:47.148931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:47.174407 1542350 cri.go:89] found id: ""
	I1213 16:17:47.174436 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.174445 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:47.174452 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:47.175791 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:47.207990 1542350 cri.go:89] found id: ""
	I1213 16:17:47.208024 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.208034 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:47.208041 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:47.208115 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:47.232910 1542350 cri.go:89] found id: ""
	I1213 16:17:47.232938 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.232947 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:47.232953 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:47.233015 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:47.256927 1542350 cri.go:89] found id: ""
	I1213 16:17:47.256952 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.256961 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:47.256967 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:47.257049 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:47.285254 1542350 cri.go:89] found id: ""
	I1213 16:17:47.285281 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.285290 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:47.285296 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:47.285356 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:47.309997 1542350 cri.go:89] found id: ""
	I1213 16:17:47.310027 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.310037 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:47.310046 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:47.310060 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:47.326038 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:47.326073 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:47.390775 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:47.383176   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.383650   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385126   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385506   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.386933   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:47.383176   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.383650   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385126   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385506   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.386933   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:47.390796 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:47.390809 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:47.415331 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:47.415362 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:47.442477 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:47.442503 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:50.000902 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:50.015948 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:50.016030 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:50.046794 1542350 cri.go:89] found id: ""
	I1213 16:17:50.046819 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.046827 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:50.046834 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:50.046890 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:50.073072 1542350 cri.go:89] found id: ""
	I1213 16:17:50.073106 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.073116 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:50.073124 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:50.073186 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:50.111358 1542350 cri.go:89] found id: ""
	I1213 16:17:50.111384 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.111393 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:50.111403 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:50.111468 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:50.141482 1542350 cri.go:89] found id: ""
	I1213 16:17:50.141510 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.141519 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:50.141525 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:50.141584 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:50.168684 1542350 cri.go:89] found id: ""
	I1213 16:17:50.168711 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.168720 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:50.168727 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:50.168806 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:50.194609 1542350 cri.go:89] found id: ""
	I1213 16:17:50.194633 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.194642 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:50.194648 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:50.194708 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:50.220707 1542350 cri.go:89] found id: ""
	I1213 16:17:50.220732 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.220741 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:50.220746 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:50.220810 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:50.245930 1542350 cri.go:89] found id: ""
	I1213 16:17:50.245956 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.245965 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:50.245975 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:50.245987 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:50.301111 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:50.301147 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:50.317024 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:50.317051 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:50.379354 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:50.370376   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.371063   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.372767   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.373333   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.375036   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:50.370376   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.371063   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.372767   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.373333   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.375036   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:50.379375 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:50.379388 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:50.403891 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:50.403925 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:52.933071 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:52.944075 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:52.944148 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:52.969292 1542350 cri.go:89] found id: ""
	I1213 16:17:52.969318 1542350 logs.go:282] 0 containers: []
	W1213 16:17:52.969327 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:52.969333 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:52.969393 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:52.997688 1542350 cri.go:89] found id: ""
	I1213 16:17:52.997717 1542350 logs.go:282] 0 containers: []
	W1213 16:17:52.997727 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:52.997733 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:52.997795 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:53.024102 1542350 cri.go:89] found id: ""
	I1213 16:17:53.024134 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.024144 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:53.024150 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:53.024214 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:53.054126 1542350 cri.go:89] found id: ""
	I1213 16:17:53.054149 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.054159 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:53.054165 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:53.054227 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:53.078840 1542350 cri.go:89] found id: ""
	I1213 16:17:53.078918 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.078940 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:53.078958 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:53.079041 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:53.134282 1542350 cri.go:89] found id: ""
	I1213 16:17:53.134313 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.134326 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:53.134332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:53.134401 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:53.170263 1542350 cri.go:89] found id: ""
	I1213 16:17:53.170287 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.170296 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:53.170302 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:53.170366 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:53.195555 1542350 cri.go:89] found id: ""
	I1213 16:17:53.195578 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.195587 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:53.195596 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:53.195612 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:53.221475 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:53.221510 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:53.256145 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:53.256172 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:53.312142 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:53.312178 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:53.328755 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:53.328784 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:53.392981 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:53.384949   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.385331   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.386801   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.387094   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.388517   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:53.384949   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.385331   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.386801   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.387094   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.388517   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:55.894678 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:55.905837 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:55.905910 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:55.931137 1542350 cri.go:89] found id: ""
	I1213 16:17:55.931159 1542350 logs.go:282] 0 containers: []
	W1213 16:17:55.931168 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:55.931175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:55.931236 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:55.955775 1542350 cri.go:89] found id: ""
	I1213 16:17:55.955801 1542350 logs.go:282] 0 containers: []
	W1213 16:17:55.955810 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:55.955817 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:55.955877 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:55.981227 1542350 cri.go:89] found id: ""
	I1213 16:17:55.981253 1542350 logs.go:282] 0 containers: []
	W1213 16:17:55.981262 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:55.981268 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:55.981329 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:56.008866 1542350 cri.go:89] found id: ""
	I1213 16:17:56.008892 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.008902 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:56.008909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:56.008975 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:56.035606 1542350 cri.go:89] found id: ""
	I1213 16:17:56.035635 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.035644 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:56.035650 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:56.035712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:56.061753 1542350 cri.go:89] found id: ""
	I1213 16:17:56.061780 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.061789 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:56.061795 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:56.061858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:56.099036 1542350 cri.go:89] found id: ""
	I1213 16:17:56.099065 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.099074 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:56.099081 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:56.099142 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:56.133464 1542350 cri.go:89] found id: ""
	I1213 16:17:56.133491 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.133500 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:56.133510 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:56.133522 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:56.155287 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:56.155412 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:56.223561 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:56.214782   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.215583   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217326   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217944   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.219582   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:56.214782   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.215583   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217326   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217944   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.219582   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:56.223629 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:56.223650 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:56.249923 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:56.249965 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:56.280662 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:56.280692 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:58.836837 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:58.848594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:58.848659 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:58.881904 1542350 cri.go:89] found id: ""
	I1213 16:17:58.881927 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.881935 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:58.881941 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:58.882001 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:58.917932 1542350 cri.go:89] found id: ""
	I1213 16:17:58.917954 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.917963 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:58.917969 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:58.918028 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:58.945580 1542350 cri.go:89] found id: ""
	I1213 16:17:58.945653 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.945668 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:58.945676 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:58.945753 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:58.971398 1542350 cri.go:89] found id: ""
	I1213 16:17:58.971424 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.971434 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:58.971440 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:58.971503 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:59.001302 1542350 cri.go:89] found id: ""
	I1213 16:17:59.001329 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.001339 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:59.001345 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:59.001409 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:59.028353 1542350 cri.go:89] found id: ""
	I1213 16:17:59.028379 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.028388 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:59.028394 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:59.028470 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:59.052548 1542350 cri.go:89] found id: ""
	I1213 16:17:59.052577 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.052586 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:59.052593 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:59.052653 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:59.077515 1542350 cri.go:89] found id: ""
	I1213 16:17:59.077541 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.077550 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:59.077560 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:59.077571 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:59.141173 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:59.141249 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:59.158291 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:59.158371 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:59.225799 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:59.216416   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.217255   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.219459   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.220025   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.221754   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:59.216416   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.217255   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.219459   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.220025   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.221754   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:59.225867 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:59.225890 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:59.251561 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:59.251597 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:01.784053 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:01.795325 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:01.795393 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:01.819579 1542350 cri.go:89] found id: ""
	I1213 16:18:01.819605 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.819615 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:01.819622 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:01.819683 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:01.857561 1542350 cri.go:89] found id: ""
	I1213 16:18:01.857588 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.857597 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:01.857604 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:01.857668 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:01.893605 1542350 cri.go:89] found id: ""
	I1213 16:18:01.893633 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.893642 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:01.893650 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:01.893706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:01.931676 1542350 cri.go:89] found id: ""
	I1213 16:18:01.931783 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.931803 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:01.931812 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:01.931935 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:01.959175 1542350 cri.go:89] found id: ""
	I1213 16:18:01.959249 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.959272 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:01.959292 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:01.959398 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:01.984753 1542350 cri.go:89] found id: ""
	I1213 16:18:01.984784 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.984794 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:01.984800 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:01.984865 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:02.016830 1542350 cri.go:89] found id: ""
	I1213 16:18:02.016860 1542350 logs.go:282] 0 containers: []
	W1213 16:18:02.016870 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:02.016876 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:02.016939 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:02.042747 1542350 cri.go:89] found id: ""
	I1213 16:18:02.042776 1542350 logs.go:282] 0 containers: []
	W1213 16:18:02.042785 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:02.042794 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:02.042805 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:02.101057 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:02.101093 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:02.118948 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:02.118972 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:02.188051 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:02.179066   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180018   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180758   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182317   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182771   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:02.179066   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180018   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180758   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182317   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182771   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:02.188077 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:02.188091 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:02.214276 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:02.214316 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:04.742630 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:04.753656 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:04.753725 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:04.779281 1542350 cri.go:89] found id: ""
	I1213 16:18:04.779338 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.779349 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:04.779355 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:04.779418 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:04.806060 1542350 cri.go:89] found id: ""
	I1213 16:18:04.806099 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.806108 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:04.806114 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:04.806195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:04.831390 1542350 cri.go:89] found id: ""
	I1213 16:18:04.831416 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.831425 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:04.831432 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:04.831501 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:04.865636 1542350 cri.go:89] found id: ""
	I1213 16:18:04.865663 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.865673 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:04.865680 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:04.865746 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:04.893812 1542350 cri.go:89] found id: ""
	I1213 16:18:04.893836 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.893845 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:04.893851 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:04.893916 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:04.922033 1542350 cri.go:89] found id: ""
	I1213 16:18:04.922062 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.922071 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:04.922077 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:04.922135 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:04.952026 1542350 cri.go:89] found id: ""
	I1213 16:18:04.952052 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.952061 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:04.952068 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:04.952129 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:04.979878 1542350 cri.go:89] found id: ""
	I1213 16:18:04.979901 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.979910 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:04.979919 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:04.979931 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:05.038448 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:05.038485 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:05.055056 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:05.055086 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:05.138791 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:05.128391   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.129071   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.130643   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.131070   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.134791   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:05.128391   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.129071   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.130643   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.131070   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.134791   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:05.138815 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:05.138828 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:05.170511 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:05.170549 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:07.701516 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:07.711811 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:07.711881 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:07.737115 1542350 cri.go:89] found id: ""
	I1213 16:18:07.737139 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.737148 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:07.737154 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:07.737216 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:07.761282 1542350 cri.go:89] found id: ""
	I1213 16:18:07.761305 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.761313 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:07.761319 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:07.761375 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:07.788777 1542350 cri.go:89] found id: ""
	I1213 16:18:07.788804 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.788813 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:07.788828 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:07.788893 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:07.813606 1542350 cri.go:89] found id: ""
	I1213 16:18:07.813633 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.813642 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:07.813650 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:07.813762 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:07.846070 1542350 cri.go:89] found id: ""
	I1213 16:18:07.846100 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.846109 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:07.846115 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:07.846178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:07.877868 1542350 cri.go:89] found id: ""
	I1213 16:18:07.877894 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.877903 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:07.877909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:07.877978 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:07.906297 1542350 cri.go:89] found id: ""
	I1213 16:18:07.906322 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.906331 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:07.906337 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:07.906411 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:07.935165 1542350 cri.go:89] found id: ""
	I1213 16:18:07.935191 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.935200 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:07.935209 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:07.935221 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:07.990632 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:07.990666 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:08.006620 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:08.006668 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:08.074292 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:08.065222   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.066181   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.067860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.068200   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.069739   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:08.065222   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.066181   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.067860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.068200   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.069739   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:08.074313 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:08.074338 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:08.103200 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:08.103236 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:10.643571 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:10.654051 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:10.654120 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:10.678184 1542350 cri.go:89] found id: ""
	I1213 16:18:10.678213 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.678222 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:10.678229 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:10.678286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:10.714102 1542350 cri.go:89] found id: ""
	I1213 16:18:10.714129 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.714137 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:10.714143 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:10.714204 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:10.738091 1542350 cri.go:89] found id: ""
	I1213 16:18:10.738114 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.738123 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:10.738129 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:10.738187 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:10.762969 1542350 cri.go:89] found id: ""
	I1213 16:18:10.762996 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.763005 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:10.763010 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:10.763068 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:10.788695 1542350 cri.go:89] found id: ""
	I1213 16:18:10.788718 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.788726 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:10.788732 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:10.788790 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:10.813304 1542350 cri.go:89] found id: ""
	I1213 16:18:10.813331 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.813339 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:10.813346 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:10.813404 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:10.840988 1542350 cri.go:89] found id: ""
	I1213 16:18:10.841013 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.841022 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:10.841028 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:10.841085 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:10.872923 1542350 cri.go:89] found id: ""
	I1213 16:18:10.872947 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.872957 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:10.872966 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:10.872978 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:10.913313 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:10.913342 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:10.970044 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:10.970079 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:10.986369 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:10.986399 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:11.056440 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:11.047528   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.048477   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050273   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050853   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.052407   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:11.047528   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.048477   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050273   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050853   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.052407   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:11.056461 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:11.056474 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:13.582630 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:13.593495 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:13.593570 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:13.618406 1542350 cri.go:89] found id: ""
	I1213 16:18:13.618429 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.618438 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:13.618444 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:13.618503 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:13.643366 1542350 cri.go:89] found id: ""
	I1213 16:18:13.643392 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.643401 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:13.643407 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:13.643470 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:13.668878 1542350 cri.go:89] found id: ""
	I1213 16:18:13.668903 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.668912 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:13.668918 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:13.668976 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:13.694282 1542350 cri.go:89] found id: ""
	I1213 16:18:13.694309 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.694318 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:13.694324 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:13.694383 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:13.722288 1542350 cri.go:89] found id: ""
	I1213 16:18:13.722318 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.722326 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:13.722332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:13.722391 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:13.749131 1542350 cri.go:89] found id: ""
	I1213 16:18:13.749156 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.749165 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:13.749177 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:13.749234 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:13.772877 1542350 cri.go:89] found id: ""
	I1213 16:18:13.772905 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.772915 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:13.772924 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:13.773024 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:13.797195 1542350 cri.go:89] found id: ""
	I1213 16:18:13.797222 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.797232 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:13.797241 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:13.797253 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:13.875404 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:13.862729   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.863769   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.867326   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869105   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869716   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:13.862729   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.863769   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.867326   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869105   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869716   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:13.875426 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:13.875439 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:13.907083 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:13.907122 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:13.940383 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:13.940412 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:13.999033 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:13.999073 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:16.517512 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:16.531616 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:16.531687 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:16.555921 1542350 cri.go:89] found id: ""
	I1213 16:18:16.555944 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.555952 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:16.555958 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:16.556017 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:16.585501 1542350 cri.go:89] found id: ""
	I1213 16:18:16.585523 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.585532 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:16.585538 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:16.585597 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:16.609776 1542350 cri.go:89] found id: ""
	I1213 16:18:16.609800 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.609810 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:16.609815 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:16.609874 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:16.633727 1542350 cri.go:89] found id: ""
	I1213 16:18:16.633801 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.633828 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:16.633847 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:16.633919 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:16.663010 1542350 cri.go:89] found id: ""
	I1213 16:18:16.663034 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.663042 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:16.663048 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:16.663104 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:16.689483 1542350 cri.go:89] found id: ""
	I1213 16:18:16.689506 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.689514 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:16.689521 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:16.689579 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:16.713920 1542350 cri.go:89] found id: ""
	I1213 16:18:16.713946 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.713955 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:16.713963 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:16.714023 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:16.739270 1542350 cri.go:89] found id: ""
	I1213 16:18:16.739297 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.739366 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:16.739377 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:16.739391 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:16.805237 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:16.796686   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.797427   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799164   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799670   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.801345   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:16.796686   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.797427   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799164   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799670   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.801345   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:16.805260 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:16.805272 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:16.830391 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:16.830421 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:16.875174 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:16.875203 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:16.940670 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:16.940707 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:19.457858 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:19.469305 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:19.469382 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:19.494702 1542350 cri.go:89] found id: ""
	I1213 16:18:19.494728 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.494739 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:19.494745 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:19.494805 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:19.526787 1542350 cri.go:89] found id: ""
	I1213 16:18:19.526811 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.526820 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:19.526826 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:19.526892 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:19.553929 1542350 cri.go:89] found id: ""
	I1213 16:18:19.553952 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.553961 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:19.553967 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:19.554025 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:19.578994 1542350 cri.go:89] found id: ""
	I1213 16:18:19.579021 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.579029 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:19.579036 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:19.579094 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:19.605160 1542350 cri.go:89] found id: ""
	I1213 16:18:19.605184 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.605202 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:19.605209 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:19.605271 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:19.629853 1542350 cri.go:89] found id: ""
	I1213 16:18:19.629880 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.629889 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:19.629896 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:19.629963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:19.654551 1542350 cri.go:89] found id: ""
	I1213 16:18:19.654578 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.654588 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:19.654594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:19.654674 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:19.679386 1542350 cri.go:89] found id: ""
	I1213 16:18:19.679410 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.679420 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:19.679429 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:19.679440 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:19.704792 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:19.704824 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:19.733848 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:19.733877 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:19.789321 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:19.789357 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:19.805414 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:19.805442 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:19.893754 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:19.884877   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886081   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886716   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888200   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888520   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:19.884877   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886081   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886716   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888200   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888520   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:22.394654 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:22.408580 1542350 out.go:203] 
	W1213 16:18:22.411606 1542350 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 16:18:22.411646 1542350 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 16:18:22.411657 1542350 out.go:285] * Related issues:
	W1213 16:18:22.411669 1542350 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 16:18:22.411682 1542350 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 16:18:22.414454 1542350 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172900077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172913106Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172962434Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172980173Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172991151Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173001884Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173012173Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173023233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173045772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173088831Z" level=info msg="Connect containerd service"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173368570Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.174111740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.184422184Z" level=info msg="Start subscribing containerd event"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.184638121Z" level=info msg="Start recovering state"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.184605425Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.184847954Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221873894Z" level=info msg="Start event monitor"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221935570Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221945818Z" level=info msg="Start streaming server"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221955041Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221964312Z" level=info msg="runtime interface starting up..."
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221971163Z" level=info msg="starting plugins..."
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.222006157Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 16:12:20 newest-cni-526531 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.224181983Z" level=info msg="containerd successfully booted in 0.088659s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:31.748725   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:31.749366   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:31.750952   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:31.751480   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:31.753041   13765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 16:18:31 up  8:01,  0 user,  load average: 1.00, 0.76, 1.05
	Linux newest-cni-526531 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 16:18:27 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:18:27 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:18:27 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:29 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:29 newest-cni-526531 kubelet[13612]: E1213 16:18:29.297672   13612 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:18:29 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:18:29 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:18:30 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 13 16:18:30 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:30 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:30 newest-cni-526531 kubelet[13649]: E1213 16:18:30.159242   13649 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:18:30 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:18:30 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:18:30 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 13 16:18:30 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:30 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:30 newest-cni-526531 kubelet[13669]: E1213 16:18:30.899095   13669 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:18:30 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:18:30 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:18:31 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 13 16:18:31 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:31 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:31 newest-cni-526531 kubelet[13739]: E1213 16:18:31.646146   13739 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:18:31 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:18:31 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531: exit status 2 (372.844965ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-526531" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-526531
helpers_test.go:244: (dbg) docker inspect newest-cni-526531:

-- stdout --
	[
	    {
	        "Id": "dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54",
	        "Created": "2025-12-13T16:02:15.548035148Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1542480,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T16:12:14.158493479Z",
	            "FinishedAt": "2025-12-13T16:12:12.79865571Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/hosts",
	        "LogPath": "/var/lib/docker/containers/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54/dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54-json.log",
	        "Name": "/newest-cni-526531",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-526531:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-526531",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd2af60ccebf72215dd839f7b3cd5dab38a18287f5491071fb9a17f1e852ac54",
	                "LowerDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3024c4b0e975ebb8657bae012179f06255ab85cd84ba26121505b6c54622bc5d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-526531",
	                "Source": "/var/lib/docker/volumes/newest-cni-526531/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-526531",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-526531",
	                "name.minikube.sigs.k8s.io": "newest-cni-526531",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "57c40ce56d621d0f69c7bac6d3cb56a638b53bb82fd302b1930b9f51563e995b",
	            "SandboxKey": "/var/run/docker/netns/57c40ce56d62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34233"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34234"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34237"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34235"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34236"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-526531": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:43:0b:15:7e:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ae0d89b977ec0aa4cc17943d84decbf5f3cf47ff39573e4d4fdb9e9873e2828c",
	                    "EndpointID": "4d19fec2228064ef379084c28bbbd96c0fa36a4142ac70319780a70953fdc4e8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-526531",
	                        "dd2af60ccebf"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-526531 -n newest-cni-526531
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-526531 -n newest-cni-526531: exit status 2 (339.117503ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-526531 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-526531 logs -n 25: (1.549440786s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p embed-certs-270324                                                                                                                                                                                                                                      │ embed-certs-270324           │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ delete  │ -p disable-driver-mounts-614298                                                                                                                                                                                                                            │ disable-driver-mounts-614298 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 15:59 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 15:59 UTC │ 13 Dec 25 16:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-946932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:00 UTC │
	│ stop    │ -p default-k8s-diff-port-946932 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:00 UTC │ 13 Dec 25 16:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-946932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ start   │ -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:01 UTC │ 13 Dec 25 16:01 UTC │
	│ image   │ default-k8s-diff-port-946932 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ pause   │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ unpause │ -p default-k8s-diff-port-946932 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ delete  │ -p default-k8s-diff-port-946932                                                                                                                                                                                                                            │ default-k8s-diff-port-946932 │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │ 13 Dec 25 16:02 UTC │
	│ start   │ -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-439544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:02 UTC │                     │
	│ stop    │ -p no-preload-439544 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │ 13 Dec 25 16:04 UTC │
	│ addons  │ enable dashboard -p no-preload-439544 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │ 13 Dec 25 16:04 UTC │
	│ start   │ -p no-preload-439544 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-439544            │ jenkins │ v1.37.0 │ 13 Dec 25 16:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-526531 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:10 UTC │                     │
	│ stop    │ -p newest-cni-526531 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:12 UTC │ 13 Dec 25 16:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-526531 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:12 UTC │ 13 Dec 25 16:12 UTC │
	│ start   │ -p newest-cni-526531 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:12 UTC │                     │
	│ image   │ newest-cni-526531 image list --format=json                                                                                                                                                                                                                 │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:18 UTC │ 13 Dec 25 16:18 UTC │
	│ pause   │ -p newest-cni-526531 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:18 UTC │ 13 Dec 25 16:18 UTC │
	│ unpause │ -p newest-cni-526531 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-526531            │ jenkins │ v1.37.0 │ 13 Dec 25 16:18 UTC │ 13 Dec 25 16:18 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─
────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 16:12:13
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 16:12:13.872500 1542350 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:12:13.872721 1542350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:12:13.872749 1542350 out.go:374] Setting ErrFile to fd 2...
	I1213 16:12:13.872769 1542350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:12:13.873083 1542350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:12:13.873513 1542350 out.go:368] Setting JSON to false
	I1213 16:12:13.874453 1542350 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28483,"bootTime":1765613851,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:12:13.874604 1542350 start.go:143] virtualization:  
	I1213 16:12:13.877765 1542350 out.go:179] * [newest-cni-526531] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:12:13.881549 1542350 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:12:13.881619 1542350 notify.go:221] Checking for updates...
	I1213 16:12:13.887324 1542350 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:12:13.890274 1542350 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:12:13.893162 1542350 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:12:13.896033 1542350 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:12:13.898948 1542350 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:12:13.902364 1542350 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:12:13.902980 1542350 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:12:13.935990 1542350 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:12:13.936167 1542350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:12:14.000058 1542350 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:12:13.991072746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:12:14.000167 1542350 docker.go:319] overlay module found
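The docker info dump above is what the driver validation keys on (cgroup driver, CPU count, memory, server version), and "overlay module found" is the storage prerequisite behind it. A minimal sketch of making the same checks by hand on the CI host, assuming only the docker CLI and lsmod are available (these are not commands the test itself runs):

    # query just the fields minikube's driver validation cares about
    docker system info --format 'driver={{.Driver}} cgroup={{.CgroupDriver}} cpus={{.NCPU}} mem={{.MemTotal}} version={{.ServerVersion}}'
    # confirm the overlay kernel module is loaded (basis of the "overlay module found" line)
    lsmod | grep -w '^overlay' && echo "overlay module present"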
	I1213 16:12:14.005438 1542350 out.go:179] * Using the docker driver based on existing profile
	I1213 16:12:14.008564 1542350 start.go:309] selected driver: docker
	I1213 16:12:14.008597 1542350 start.go:927] validating driver "docker" against &{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString
: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:12:14.008696 1542350 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:12:14.009457 1542350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:12:14.067852 1542350 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:12:14.058134833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:12:14.068237 1542350 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 16:12:14.068271 1542350 cni.go:84] Creating CNI manager for ""
	I1213 16:12:14.068329 1542350 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:12:14.068382 1542350 start.go:353] cluster config:
	{Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:12:14.071643 1542350 out.go:179] * Starting "newest-cni-526531" primary control-plane node in "newest-cni-526531" cluster
	I1213 16:12:14.074436 1542350 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:12:14.077449 1542350 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:12:14.080394 1542350 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:12:14.080442 1542350 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 16:12:14.080452 1542350 cache.go:65] Caching tarball of preloaded images
	I1213 16:12:14.080507 1542350 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:12:14.080564 1542350 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 16:12:14.080575 1542350 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 16:12:14.080690 1542350 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:12:14.101187 1542350 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:12:14.101205 1542350 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:12:14.101219 1542350 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:12:14.101249 1542350 start.go:360] acquireMachinesLock for newest-cni-526531: {Name:mk8328ad899404812480e264edd6e13cbbd26230 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:12:14.101300 1542350 start.go:364] duration metric: took 35.502µs to acquireMachinesLock for "newest-cni-526531"
	I1213 16:12:14.101319 1542350 start.go:96] Skipping create...Using existing machine configuration
	I1213 16:12:14.101324 1542350 fix.go:54] fixHost starting: 
	I1213 16:12:14.101579 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:14.120089 1542350 fix.go:112] recreateIfNeeded on newest-cni-526531: state=Stopped err=<nil>
	W1213 16:12:14.120117 1542350 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 16:12:14.123566 1542350 out.go:252] * Restarting existing docker container for "newest-cni-526531" ...
	I1213 16:12:14.123658 1542350 cli_runner.go:164] Run: docker start newest-cni-526531
	I1213 16:12:14.407857 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:14.431483 1542350 kic.go:430] container "newest-cni-526531" state is running.
	I1213 16:12:14.431880 1542350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:12:14.455073 1542350 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/config.json ...
	I1213 16:12:14.455509 1542350 machine.go:94] provisionDockerMachine start ...
	I1213 16:12:14.455579 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:14.483076 1542350 main.go:143] libmachine: Using SSH client type: native
	I1213 16:12:14.483636 1542350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I1213 16:12:14.483652 1542350 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:12:14.484350 1542350 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 16:12:17.634930 1542350 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:12:17.634954 1542350 ubuntu.go:182] provisioning hostname "newest-cni-526531"
	I1213 16:12:17.635019 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:17.654681 1542350 main.go:143] libmachine: Using SSH client type: native
	I1213 16:12:17.654996 1542350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I1213 16:12:17.655008 1542350 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-526531 && echo "newest-cni-526531" | sudo tee /etc/hostname
	I1213 16:12:17.812861 1542350 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-526531
	
	I1213 16:12:17.812938 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:17.830348 1542350 main.go:143] libmachine: Using SSH client type: native
	I1213 16:12:17.830658 1542350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34233 <nil> <nil>}
	I1213 16:12:17.830675 1542350 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-526531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-526531/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-526531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:12:17.987587 1542350 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:12:17.987621 1542350 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:12:17.987641 1542350 ubuntu.go:190] setting up certificates
	I1213 16:12:17.987659 1542350 provision.go:84] configureAuth start
	I1213 16:12:17.987726 1542350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:12:18.011145 1542350 provision.go:143] copyHostCerts
	I1213 16:12:18.011230 1542350 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:12:18.011240 1542350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:12:18.011430 1542350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:12:18.011569 1542350 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:12:18.011584 1542350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:12:18.011623 1542350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:12:18.011690 1542350 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:12:18.011698 1542350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:12:18.011724 1542350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:12:18.011776 1542350 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.newest-cni-526531 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-526531]
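The provisioner generates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-526531). A short verification sketch using openssl and the server.pem path from the log, run manually on the host rather than by the test:

    # print the SAN extension of the freshly generated server certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'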
	I1213 16:12:18.508738 1542350 provision.go:177] copyRemoteCerts
	I1213 16:12:18.508811 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:12:18.508861 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.526422 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:18.636742 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 16:12:18.655155 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 16:12:18.674107 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:12:18.692128 1542350 provision.go:87] duration metric: took 704.439864ms to configureAuth
	I1213 16:12:18.692158 1542350 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:12:18.692373 1542350 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:12:18.692387 1542350 machine.go:97] duration metric: took 4.236863655s to provisionDockerMachine
	I1213 16:12:18.692395 1542350 start.go:293] postStartSetup for "newest-cni-526531" (driver="docker")
	I1213 16:12:18.692409 1542350 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:12:18.692476 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:12:18.692523 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.710444 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:18.815900 1542350 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:12:18.819552 1542350 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:12:18.819582 1542350 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:12:18.819595 1542350 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:12:18.819651 1542350 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:12:18.819740 1542350 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:12:18.819846 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:12:18.827635 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:12:18.845967 1542350 start.go:296] duration metric: took 153.553828ms for postStartSetup
	I1213 16:12:18.846048 1542350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:12:18.846103 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.863404 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:18.964333 1542350 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:12:18.969276 1542350 fix.go:56] duration metric: took 4.867943668s for fixHost
	I1213 16:12:18.969308 1542350 start.go:83] releasing machines lock for "newest-cni-526531", held for 4.867999692s
	I1213 16:12:18.969378 1542350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-526531
	I1213 16:12:18.986065 1542350 ssh_runner.go:195] Run: cat /version.json
	I1213 16:12:18.986168 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:18.986433 1542350 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:12:18.986485 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:19.008809 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:19.015681 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:19.197190 1542350 ssh_runner.go:195] Run: systemctl --version
	I1213 16:12:19.203734 1542350 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:12:19.208293 1542350 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:12:19.208365 1542350 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:12:19.216699 1542350 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 16:12:19.216724 1542350 start.go:496] detecting cgroup driver to use...
	I1213 16:12:19.216769 1542350 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:12:19.216822 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:12:19.235051 1542350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:12:19.248627 1542350 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:12:19.248695 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:12:19.264536 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:12:19.278273 1542350 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:12:19.415282 1542350 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:12:19.542944 1542350 docker.go:234] disabling docker service ...
	I1213 16:12:19.543049 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:12:19.558893 1542350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:12:19.572698 1542350 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:12:19.700893 1542350 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:12:19.830331 1542350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:12:19.843617 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:12:19.858193 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:12:19.867834 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:12:19.877291 1542350 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:12:19.877362 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:12:19.886078 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:12:19.894812 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:12:19.903917 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:12:19.912720 1542350 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:12:19.921167 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:12:19.930798 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:12:19.940230 1542350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:12:19.950040 1542350 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:12:19.958360 1542350 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:12:19.966286 1542350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:12:20.089676 1542350 ssh_runner.go:195] Run: sudo systemctl restart containerd
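The sed edits above switch containerd to the cgroupfs driver (SystemdCgroup = false), pin the pause image, and reset the CNI conf_dir; the daemon-reload and restart apply them. A minimal sketch of verifying the edits took effect after the restart (manual checks, not steps the test performs):

    # the driver switch the log announces ("configuring containerd to use cgroupfs")
    grep -n 'SystemdCgroup' /etc/containerd/config.toml    # expected: SystemdCgroup = false
    grep -n 'sandbox_image' /etc/containerd/config.toml    # expected: registry.k8s.io/pause:3.10.1
    # the socket the next step waits up to 60s for
    test -S /run/containerd/containerd.sock && echo "containerd socket is up"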
	I1213 16:12:20.224467 1542350 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:12:20.224608 1542350 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:12:20.228661 1542350 start.go:564] Will wait 60s for crictl version
	I1213 16:12:20.228772 1542350 ssh_runner.go:195] Run: which crictl
	I1213 16:12:20.232454 1542350 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:12:20.257719 1542350 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
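The crictl version output above goes through the endpoint configured a few lines earlier, when runtime-endpoint: unix:///run/containerd/containerd.sock was written to /etc/crictl.yaml. A sketch of the same query made explicitly, without relying on that config file (a manual check, not part of the run):

    # point crictl at containerd's socket explicitly and ask for runtime version info
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
    # the default written by the run, which makes the endpoint flag unnecessary afterwards
    cat /etc/crictl.yaml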
	I1213 16:12:20.257840 1542350 ssh_runner.go:195] Run: containerd --version
	I1213 16:12:20.279500 1542350 ssh_runner.go:195] Run: containerd --version
	I1213 16:12:20.302783 1542350 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 16:12:20.305579 1542350 cli_runner.go:164] Run: docker network inspect newest-cni-526531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:12:20.322844 1542350 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 16:12:20.326903 1542350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:12:20.339926 1542350 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 16:12:20.342782 1542350 kubeadm.go:884] updating cluster {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:12:20.342928 1542350 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 16:12:20.343016 1542350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:12:20.367771 1542350 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:12:20.367795 1542350 containerd.go:534] Images already preloaded, skipping extraction
	I1213 16:12:20.367857 1542350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:12:20.393096 1542350 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:12:20.393118 1542350 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:12:20.393126 1542350 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 16:12:20.393232 1542350 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-526531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
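The kubelet unit override above is what gets written to the node as the 10-kubeadm.conf drop-in scp'd further down (328 bytes). A small sketch of confirming, on the node, that the drop-in is in place and which ExecStart the service actually resolved to (manual verification, not a step the test performs):

    # show the kubelet unit plus the drop-ins systemd merged into it
    sudo systemctl cat kubelet
    # just the effective ExecStart (should carry --hostname-override=newest-cni-526531 --node-ip=192.168.76.2)
    sudo systemctl show kubelet -p ExecStart --no-pager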
	I1213 16:12:20.393305 1542350 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:12:20.418251 1542350 cni.go:84] Creating CNI manager for ""
	I1213 16:12:20.418277 1542350 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 16:12:20.418295 1542350 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 16:12:20.418318 1542350 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-526531 NodeName:newest-cni-526531 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:12:20.418435 1542350 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-526531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
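The kubeadm config printed above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2235-byte scp a few lines below). A quick sketch of spot-checking the fields that matter for this profile before kubeadm consumes it (a manual check, not part of the run):

    # pod CIDR from --extra-config, service CIDR, cgroup driver, CRI endpoint, control-plane endpoint
    grep -E 'podSubnet|serviceSubnet|cgroupDriver|containerRuntimeEndpoint|controlPlaneEndpoint' \
      /var/tmp/minikube/kubeadm.yaml.new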
	
	I1213 16:12:20.418510 1542350 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 16:12:20.426561 1542350 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:12:20.426663 1542350 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:12:20.434234 1542350 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 16:12:20.447269 1542350 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 16:12:20.459764 1542350 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 16:12:20.473147 1542350 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:12:20.476975 1542350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:12:20.486881 1542350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:12:20.634044 1542350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:12:20.650082 1542350 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531 for IP: 192.168.76.2
	I1213 16:12:20.650107 1542350 certs.go:195] generating shared ca certs ...
	I1213 16:12:20.650125 1542350 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:20.650260 1542350 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:12:20.650315 1542350 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:12:20.650327 1542350 certs.go:257] generating profile certs ...
	I1213 16:12:20.650431 1542350 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/client.key
	I1213 16:12:20.650494 1542350 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key.b007e6c7
	I1213 16:12:20.650541 1542350 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key
	I1213 16:12:20.650652 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:12:20.650691 1542350 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:12:20.650704 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:12:20.650731 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:12:20.650764 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:12:20.650791 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:12:20.650844 1542350 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:12:20.651682 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:12:20.679737 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:12:20.697714 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:12:20.716102 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:12:20.734754 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 16:12:20.752380 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 16:12:20.770335 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:12:20.787592 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/newest-cni-526531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1213 16:12:20.805866 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:12:20.823616 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:12:20.845606 1542350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:12:20.863659 1542350 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:12:20.877321 1542350 ssh_runner.go:195] Run: openssl version
	I1213 16:12:20.884096 1542350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.891462 1542350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:12:20.900719 1542350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.905878 1542350 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.905990 1542350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:12:20.952615 1542350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:12:20.960412 1542350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:20.967994 1542350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:12:20.975909 1542350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:20.979941 1542350 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:20.980042 1542350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:12:21.021453 1542350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:12:21.029467 1542350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.037114 1542350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:12:21.045054 1542350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.049353 1542350 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.049420 1542350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:12:21.090431 1542350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:12:21.097998 1542350 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:12:21.101759 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 16:12:21.142651 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 16:12:21.183449 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 16:12:21.224713 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 16:12:21.267101 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 16:12:21.308542 1542350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
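The block above installs the CA certificates under /usr/share/ca-certificates, computes each one's openssl subject hash, and checks that the matching hash-named symlink exists under /etc/ssl/certs (3ec20f2e.0, b5213941.0, 51391683.0); the -checkend 86400 runs then confirm none of the control-plane certificates expire within the next 24 hours. The same two checks done by hand, using paths taken from the log (a verification sketch, not test logic):

    # hash-named symlink that openssl's CApath lookup expects
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"
    # 86400s = 24h: exit status 0 means the cert is still valid a day from now
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "apiserver-kubelet-client.crt valid for at least 24h"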
	I1213 16:12:21.350324 1542350 kubeadm.go:401] StartCluster: {Name:newest-cni-526531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-526531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:12:21.350489 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:12:21.350594 1542350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:12:21.381089 1542350 cri.go:89] found id: ""
	I1213 16:12:21.381225 1542350 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:12:21.391210 1542350 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 16:12:21.391281 1542350 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 16:12:21.391387 1542350 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 16:12:21.399153 1542350 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 16:12:21.399882 1542350 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-526531" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:12:21.400209 1542350 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-1251074/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-526531" cluster setting kubeconfig missing "newest-cni-526531" context setting]
	I1213 16:12:21.400761 1542350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:21.402579 1542350 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 16:12:21.410218 1542350 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 16:12:21.410252 1542350 kubeadm.go:602] duration metric: took 18.943347ms to restartPrimaryControlPlane
	I1213 16:12:21.410262 1542350 kubeadm.go:403] duration metric: took 59.957451ms to StartCluster
	I1213 16:12:21.410276 1542350 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:21.410337 1542350 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:12:21.411206 1542350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:12:21.411496 1542350 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:12:21.411842 1542350 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 16:12:21.411918 1542350 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-526531"
	I1213 16:12:21.411932 1542350 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-526531"
	I1213 16:12:21.411959 1542350 host.go:66] Checking if "newest-cni-526531" exists ...
	I1213 16:12:21.412409 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.412632 1542350 config.go:182] Loaded profile config "newest-cni-526531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:12:21.412699 1542350 addons.go:70] Setting dashboard=true in profile "newest-cni-526531"
	I1213 16:12:21.412715 1542350 addons.go:239] Setting addon dashboard=true in "newest-cni-526531"
	W1213 16:12:21.412722 1542350 addons.go:248] addon dashboard should already be in state true
	I1213 16:12:21.412753 1542350 host.go:66] Checking if "newest-cni-526531" exists ...
	I1213 16:12:21.413150 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.417035 1542350 addons.go:70] Setting default-storageclass=true in profile "newest-cni-526531"
	I1213 16:12:21.417076 1542350 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-526531"
	I1213 16:12:21.417425 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.417785 1542350 out.go:179] * Verifying Kubernetes components...
	I1213 16:12:21.420756 1542350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:12:21.445354 1542350 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 16:12:21.448121 1542350 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:12:21.448150 1542350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 16:12:21.448220 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:21.451677 1542350 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 16:12:21.454559 1542350 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 16:12:21.457364 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 16:12:21.457390 1542350 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 16:12:21.457468 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:21.461079 1542350 addons.go:239] Setting addon default-storageclass=true in "newest-cni-526531"
	I1213 16:12:21.461127 1542350 host.go:66] Checking if "newest-cni-526531" exists ...
	I1213 16:12:21.461533 1542350 cli_runner.go:164] Run: docker container inspect newest-cni-526531 --format={{.State.Status}}
	I1213 16:12:21.475798 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:21.512911 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:21.534060 1542350 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:21.534082 1542350 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 16:12:21.534143 1542350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-526531
	I1213 16:12:21.567579 1542350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34233 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/newest-cni-526531/id_rsa Username:docker}
	I1213 16:12:21.655778 1542350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:12:21.660712 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:12:21.695006 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 16:12:21.695031 1542350 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 16:12:21.711844 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 16:12:21.711868 1542350 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 16:12:21.726264 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 16:12:21.726287 1542350 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 16:12:21.742159 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 16:12:21.742183 1542350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 16:12:21.759213 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 16:12:21.759234 1542350 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 16:12:21.769713 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:21.791192 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 16:12:21.791260 1542350 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 16:12:21.814992 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 16:12:21.815063 1542350 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 16:12:21.830895 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 16:12:21.830972 1542350 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 16:12:21.849742 1542350 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:21.849815 1542350 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 16:12:21.864289 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:22.085788 1542350 api_server.go:52] waiting for apiserver process to appear ...
	I1213 16:12:22.085922 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:22.086102 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.086159 1542350 retry.go:31] will retry after 179.056392ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.086246 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.086353 1542350 retry.go:31] will retry after 181.278424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.086609 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.086645 1542350 retry.go:31] will retry after 135.21458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.222538 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:22.266024 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:12:22.268540 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:22.304395 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.304479 1542350 retry.go:31] will retry after 553.734459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.383592 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.383626 1542350 retry.go:31] will retry after 310.627988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.384428 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.384454 1542350 retry.go:31] will retry after 477.647599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.586862 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:22.695343 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:22.754692 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.754771 1542350 retry.go:31] will retry after 349.01084ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.858966 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:22.862536 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:22.953516 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.953561 1542350 retry.go:31] will retry after 343.489775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:22.953788 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:22.953849 1542350 retry.go:31] will retry after 703.913124ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.086088 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:23.104680 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:23.181935 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.181974 1542350 retry.go:31] will retry after 792.501261ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.297213 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:23.357629 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.357664 1542350 retry.go:31] will retry after 710.733017ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.586938 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:23.658890 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:23.729079 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.729127 1542350 retry.go:31] will retry after 642.679357ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:23.975021 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:24.036696 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.036729 1542350 retry.go:31] will retry after 1.762152539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.068939 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:24.086560 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:24.136068 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.136100 1542350 retry.go:31] will retry after 670.883469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.372395 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:24.444952 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.444996 1542350 retry.go:31] will retry after 1.594344916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.586388 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:24.807252 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:24.873210 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:24.873241 1542350 retry.go:31] will retry after 1.504699438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:25.086635 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:25.586697 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:25.799081 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:25.864095 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:25.864173 1542350 retry.go:31] will retry after 2.833515163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.040555 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:26.086244 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:26.134589 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.134626 1542350 retry.go:31] will retry after 2.268954348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.378204 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:26.437143 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.437179 1542350 retry.go:31] will retry after 2.009206759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:26.586404 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:27.086045 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:27.586074 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:28.086070 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:28.404537 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:12:28.446967 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:28.469203 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.469234 1542350 retry.go:31] will retry after 1.799417627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:12:28.516574 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.516611 1542350 retry.go:31] will retry after 2.723803306s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.586847 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:28.698086 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:28.762693 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:28.762729 1542350 retry.go:31] will retry after 1.577559772s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:29.086307 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:29.586102 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:30.086078 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:30.269847 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:30.336710 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.336749 1542350 retry.go:31] will retry after 2.535864228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.341075 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:30.419871 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.419902 1542350 retry.go:31] will retry after 2.188608586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:30.586056 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:31.086792 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:31.241343 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:31.303140 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:31.303175 1542350 retry.go:31] will retry after 4.008884548s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:31.586821 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:32.086175 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:32.587018 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:32.608868 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:32.689818 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:32.689856 1542350 retry.go:31] will retry after 5.074576061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:32.873213 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:32.940949 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:32.940984 1542350 retry.go:31] will retry after 7.456449925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:33.086429 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:33.586022 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:34.086094 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:34.585998 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:35.086896 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:35.312254 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:35.377660 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:35.377698 1542350 retry.go:31] will retry after 9.192453055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:35.587034 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:36.086843 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:36.586051 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:37.086838 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:37.586771 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:37.765048 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:37.824278 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:37.824312 1542350 retry.go:31] will retry after 11.772995815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:38.086864 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:38.586073 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:39.086969 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:39.586055 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:40.086122 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:40.398539 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:40.468470 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:40.468513 1542350 retry.go:31] will retry after 13.248485713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:40.586656 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:41.086065 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:41.586366 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:42.086189 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:42.586086 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:43.086089 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:43.586027 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:44.086574 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:44.570741 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:12:44.586247 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:12:44.654442 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:44.654477 1542350 retry.go:31] will retry after 14.969470504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:45.086353 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:45.586835 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:46.086082 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:46.586716 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:47.086075 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:47.586621 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:48.086124 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:48.586928 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:49.087028 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:49.586115 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:49.597980 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:12:49.660643 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:49.660672 1542350 retry.go:31] will retry after 11.077380605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:50.086194 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:50.586148 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:51.086673 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:51.586443 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:52.086098 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:52.586095 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:53.086117 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:53.586714 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:53.717290 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:12:53.777883 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:53.777918 1542350 retry.go:31] will retry after 17.242726639s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:54.086154 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:54.586837 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:55.086738 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:55.586843 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:56.086112 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:56.586065 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:57.087033 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:57.587026 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:58.086821 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:58.586066 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:59.086344 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:59.586987 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:12:59.624396 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:12:59.692077 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:12:59.692113 1542350 retry.go:31] will retry after 25.118824905s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:00.086703 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:00.586076 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:00.738326 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:13:00.797829 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:00.797860 1542350 retry.go:31] will retry after 28.273971977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:01.086109 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:01.586093 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:02.086800 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:02.586059 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:03.086118 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:03.586099 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:04.086574 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:04.586119 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:05.087001 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:05.586735 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:06.087021 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:06.586098 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:07.086059 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:07.586074 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:08.086071 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:08.586627 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:09.086132 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:09.586339 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:10.086956 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:10.586102 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:11.020938 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 16:13:11.086782 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 16:13:11.098002 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:11.098037 1542350 retry.go:31] will retry after 28.022573365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:11.586801 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:12.086121 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:12.586779 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:13.086780 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:13.586110 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:14.086075 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:14.586725 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:15.086688 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:15.587040 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:16.086588 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:16.586972 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:17.086881 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:17.586014 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:18.086609 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:18.586065 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:19.086985 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:19.586109 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:20.086095 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:20.586709 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:21.086130 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:21.586680 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:21.586792 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:21.614864 1542350 cri.go:89] found id: ""
	I1213 16:13:21.614885 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.614894 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:21.614901 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:21.614963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:21.646495 1542350 cri.go:89] found id: ""
	I1213 16:13:21.646517 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.646525 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:21.646532 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:21.646592 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:21.676251 1542350 cri.go:89] found id: ""
	I1213 16:13:21.676274 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.676283 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:21.676289 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:21.676358 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:21.706048 1542350 cri.go:89] found id: ""
	I1213 16:13:21.706075 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.706084 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:21.706093 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:21.706167 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:21.733595 1542350 cri.go:89] found id: ""
	I1213 16:13:21.733620 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.733628 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:21.733634 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:21.733694 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:21.758418 1542350 cri.go:89] found id: ""
	I1213 16:13:21.758444 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.758453 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:21.758459 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:21.758520 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:21.782936 1542350 cri.go:89] found id: ""
	I1213 16:13:21.782962 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.782970 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:21.782976 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:21.783038 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:21.807262 1542350 cri.go:89] found id: ""
	I1213 16:13:21.807289 1542350 logs.go:282] 0 containers: []
	W1213 16:13:21.807298 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:21.807327 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:21.807340 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:21.862632 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:21.862670 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:21.879878 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:21.879905 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:21.954675 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:21.946473    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.946958    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.948653    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.949095    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.950567    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:21.946473    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.946958    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.948653    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.949095    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:21.950567    1845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:21.954699 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:21.954712 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:21.980443 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:21.980489 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:24.514188 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:24.524708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:24.524788 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:24.549819 1542350 cri.go:89] found id: ""
	I1213 16:13:24.549840 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.549848 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:24.549866 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:24.549925 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:24.574754 1542350 cri.go:89] found id: ""
	I1213 16:13:24.574781 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.574790 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:24.574795 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:24.574857 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:24.606443 1542350 cri.go:89] found id: ""
	I1213 16:13:24.606465 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.606474 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:24.606481 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:24.606542 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:24.638639 1542350 cri.go:89] found id: ""
	I1213 16:13:24.638660 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.638668 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:24.638674 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:24.638733 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:24.671023 1542350 cri.go:89] found id: ""
	I1213 16:13:24.671046 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.671055 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:24.671063 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:24.671137 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:24.697378 1542350 cri.go:89] found id: ""
	I1213 16:13:24.697405 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.697414 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:24.697420 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:24.697497 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:24.722594 1542350 cri.go:89] found id: ""
	I1213 16:13:24.722621 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.722631 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:24.722637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:24.722728 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:24.746821 1542350 cri.go:89] found id: ""
	I1213 16:13:24.746850 1542350 logs.go:282] 0 containers: []
	W1213 16:13:24.746860 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:24.746878 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:24.746891 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:24.763249 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:24.763286 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 16:13:24.811678 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:13:24.851435 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:24.840548    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.841317    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843094    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843728    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.845484    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:24.840548    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.841317    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843094    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.843728    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:24.845484    1951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:24.851500 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:24.851539 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1213 16:13:24.879668 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:24.879746 1542350 retry.go:31] will retry after 33.423455906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:24.890839 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:24.890870 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:24.920848 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:24.920877 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:27.476632 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:27.488585 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:27.488659 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:27.518011 1542350 cri.go:89] found id: ""
	I1213 16:13:27.518034 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.518042 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:27.518049 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:27.518110 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:27.543732 1542350 cri.go:89] found id: ""
	I1213 16:13:27.543759 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.543771 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:27.543777 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:27.543862 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:27.568999 1542350 cri.go:89] found id: ""
	I1213 16:13:27.569025 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.569033 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:27.569039 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:27.569097 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:27.607884 1542350 cri.go:89] found id: ""
	I1213 16:13:27.607913 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.607921 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:27.607928 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:27.607987 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:27.644349 1542350 cri.go:89] found id: ""
	I1213 16:13:27.644376 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.644384 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:27.644390 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:27.644461 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:27.676832 1542350 cri.go:89] found id: ""
	I1213 16:13:27.676860 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.676870 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:27.676875 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:27.676934 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:27.702113 1542350 cri.go:89] found id: ""
	I1213 16:13:27.702142 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.702151 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:27.702157 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:27.702219 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:27.727737 1542350 cri.go:89] found id: ""
	I1213 16:13:27.727763 1542350 logs.go:282] 0 containers: []
	W1213 16:13:27.727772 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:27.727782 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:27.727795 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:27.782283 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:27.782317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:27.800167 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:27.800195 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:27.871267 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:27.862487    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.863167    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.864974    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.865561    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.867199    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:27.862487    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.863167    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.864974    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.865561    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:27.867199    2074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:27.871378 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:27.871398 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:27.896932 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:27.896972 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:29.072145 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 16:13:29.152200 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:29.152237 1542350 retry.go:31] will retry after 45.772066333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:30.424283 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:30.435064 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:30.435141 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:30.458954 1542350 cri.go:89] found id: ""
	I1213 16:13:30.458977 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.458985 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:30.458991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:30.459050 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:30.482988 1542350 cri.go:89] found id: ""
	I1213 16:13:30.483016 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.483025 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:30.483031 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:30.483089 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:30.508669 1542350 cri.go:89] found id: ""
	I1213 16:13:30.508695 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.508704 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:30.508710 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:30.508797 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:30.532450 1542350 cri.go:89] found id: ""
	I1213 16:13:30.532543 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.532561 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:30.532569 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:30.532643 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:30.561998 1542350 cri.go:89] found id: ""
	I1213 16:13:30.562026 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.562035 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:30.562041 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:30.562132 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:30.600654 1542350 cri.go:89] found id: ""
	I1213 16:13:30.600688 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.600703 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:30.600711 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:30.600824 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:30.628653 1542350 cri.go:89] found id: ""
	I1213 16:13:30.628724 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.628758 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:30.628798 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:30.628886 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:30.659930 1542350 cri.go:89] found id: ""
	I1213 16:13:30.660009 1542350 logs.go:282] 0 containers: []
	W1213 16:13:30.660032 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:30.660049 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:30.660076 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:30.717289 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:30.717327 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:30.733637 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:30.733668 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:30.804923 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:30.797049    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.797717    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799180    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799496    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.800891    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:30.797049    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.797717    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799180    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.799496    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:30.800891    2193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:30.804949 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:30.804966 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:30.830439 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:30.830482 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:33.359431 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:33.370707 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:33.370778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:33.404091 1542350 cri.go:89] found id: ""
	I1213 16:13:33.404114 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.404135 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:33.404141 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:33.404200 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:33.432896 1542350 cri.go:89] found id: ""
	I1213 16:13:33.432922 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.432931 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:33.432937 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:33.433006 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:33.457244 1542350 cri.go:89] found id: ""
	I1213 16:13:33.457271 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.457280 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:33.457285 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:33.457343 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:33.482368 1542350 cri.go:89] found id: ""
	I1213 16:13:33.482389 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.482397 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:33.482403 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:33.482463 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:33.506253 1542350 cri.go:89] found id: ""
	I1213 16:13:33.506276 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.506284 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:33.506290 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:33.506350 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:33.532337 1542350 cri.go:89] found id: ""
	I1213 16:13:33.532362 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.532371 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:33.532377 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:33.532435 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:33.557859 1542350 cri.go:89] found id: ""
	I1213 16:13:33.557887 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.557896 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:33.557902 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:33.557961 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:33.585180 1542350 cri.go:89] found id: ""
	I1213 16:13:33.585208 1542350 logs.go:282] 0 containers: []
	W1213 16:13:33.585216 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:33.585226 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:33.585249 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:33.626301 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:33.626332 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:33.693048 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:33.693086 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:33.709482 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:33.709550 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:33.779437 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:33.771529    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.771977    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773496    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773820    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.775408    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:33.771529    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.771977    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773496    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.773820    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:33.775408    2320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:33.779461 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:33.779476 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:36.314080 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:36.324714 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:36.324793 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:36.352949 1542350 cri.go:89] found id: ""
	I1213 16:13:36.353025 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.353048 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:36.353066 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:36.353159 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:36.384496 1542350 cri.go:89] found id: ""
	I1213 16:13:36.384563 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.384586 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:36.384603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:36.384690 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:36.418779 1542350 cri.go:89] found id: ""
	I1213 16:13:36.418842 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.418866 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:36.418884 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:36.418968 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:36.448378 1542350 cri.go:89] found id: ""
	I1213 16:13:36.448420 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.448429 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:36.448445 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:36.448524 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:36.473284 1542350 cri.go:89] found id: ""
	I1213 16:13:36.473361 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.473376 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:36.473383 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:36.473454 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:36.500619 1542350 cri.go:89] found id: ""
	I1213 16:13:36.500642 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.500651 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:36.500663 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:36.500724 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:36.529444 1542350 cri.go:89] found id: ""
	I1213 16:13:36.529517 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.529532 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:36.529539 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:36.529609 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:36.553861 1542350 cri.go:89] found id: ""
	I1213 16:13:36.553886 1542350 logs.go:282] 0 containers: []
	W1213 16:13:36.553894 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:36.553904 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:36.553915 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:36.610671 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:36.610704 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:36.628462 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:36.628544 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:36.705883 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:36.697914    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.698707    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700211    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700636    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.702077    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:36.697914    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.698707    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700211    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.700636    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:36.702077    2421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:36.705906 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:36.705918 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:36.730607 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:36.730646 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:39.121733 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:13:39.184741 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:39.184777 1542350 retry.go:31] will retry after 19.299456104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 16:13:39.259892 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:39.271332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:39.271403 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:39.300612 1542350 cri.go:89] found id: ""
	I1213 16:13:39.300637 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.300646 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:39.300652 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:39.300712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:39.324641 1542350 cri.go:89] found id: ""
	I1213 16:13:39.324666 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.324675 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:39.324680 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:39.324739 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:39.356074 1542350 cri.go:89] found id: ""
	I1213 16:13:39.356099 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.356108 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:39.356114 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:39.356178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:39.383742 1542350 cri.go:89] found id: ""
	I1213 16:13:39.383766 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.383775 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:39.383781 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:39.383846 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:39.411271 1542350 cri.go:89] found id: ""
	I1213 16:13:39.411297 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.411305 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:39.411334 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:39.411395 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:39.437295 1542350 cri.go:89] found id: ""
	I1213 16:13:39.437321 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.437329 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:39.437336 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:39.437419 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:39.462328 1542350 cri.go:89] found id: ""
	I1213 16:13:39.462352 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.462361 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:39.462368 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:39.462445 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:39.486926 1542350 cri.go:89] found id: ""
	I1213 16:13:39.486951 1542350 logs.go:282] 0 containers: []
	W1213 16:13:39.486961 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:39.486970 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:39.486986 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:39.545864 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:39.545902 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:39.561750 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:39.561780 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:39.648853 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:39.635798    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.636607    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.637647    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.638390    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.642907    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:39.635798    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.636607    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.637647    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.638390    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:39.642907    2540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:39.648878 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:39.648893 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:39.674238 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:39.674280 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:42.203005 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:42.217190 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:42.217290 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:42.248179 1542350 cri.go:89] found id: ""
	I1213 16:13:42.248214 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.248224 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:42.248231 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:42.248315 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:42.281373 1542350 cri.go:89] found id: ""
	I1213 16:13:42.281400 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.281409 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:42.281416 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:42.281481 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:42.313298 1542350 cri.go:89] found id: ""
	I1213 16:13:42.313327 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.313343 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:42.313351 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:42.313419 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:42.347164 1542350 cri.go:89] found id: ""
	I1213 16:13:42.347256 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.347274 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:42.347282 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:42.347421 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:42.377063 1542350 cri.go:89] found id: ""
	I1213 16:13:42.377097 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.377105 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:42.377112 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:42.377195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:42.404395 1542350 cri.go:89] found id: ""
	I1213 16:13:42.404430 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.404439 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:42.404446 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:42.404522 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:42.429038 1542350 cri.go:89] found id: ""
	I1213 16:13:42.429112 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.429128 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:42.429135 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:42.429202 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:42.453891 1542350 cri.go:89] found id: ""
	I1213 16:13:42.453935 1542350 logs.go:282] 0 containers: []
	W1213 16:13:42.453944 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:42.453954 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:42.453970 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:42.509865 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:42.509901 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:42.525994 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:42.526022 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:42.601177 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:42.590564    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.592469    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.593053    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.594708    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.595182    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:42.590564    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.592469    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.593053    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.594708    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:42.595182    2655 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:42.601257 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:42.601292 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:42.630417 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:42.630495 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:45.167780 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:45.186685 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:45.186786 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:45.266905 1542350 cri.go:89] found id: ""
	I1213 16:13:45.266931 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.266941 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:45.266948 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:45.267020 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:45.302244 1542350 cri.go:89] found id: ""
	I1213 16:13:45.302273 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.302283 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:45.302289 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:45.302368 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:45.330669 1542350 cri.go:89] found id: ""
	I1213 16:13:45.330697 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.330707 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:45.330713 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:45.330777 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:45.368642 1542350 cri.go:89] found id: ""
	I1213 16:13:45.368677 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.368685 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:45.368692 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:45.368753 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:45.407608 1542350 cri.go:89] found id: ""
	I1213 16:13:45.407631 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.407639 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:45.407645 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:45.407706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:45.438077 1542350 cri.go:89] found id: ""
	I1213 16:13:45.438104 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.438112 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:45.438119 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:45.438178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:45.467617 1542350 cri.go:89] found id: ""
	I1213 16:13:45.467645 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.467654 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:45.467660 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:45.467725 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:45.496715 1542350 cri.go:89] found id: ""
	I1213 16:13:45.496741 1542350 logs.go:282] 0 containers: []
	W1213 16:13:45.496750 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:45.496760 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:45.496771 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:45.522438 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:45.522475 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:45.554662 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:45.554691 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:45.614193 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:45.614275 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:45.631794 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:45.631875 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:45.701179 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:45.692225    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.692953    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.694656    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.695386    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.697149    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:45.692225    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.692953    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.694656    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.695386    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:45.697149    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
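Every "failed describe nodes" block in this stretch has the same shape: kubectl on the node dials the apiserver at localhost:8443 and gets "connection refused", which indicates nothing is listening on that port at all rather than a TLS or authentication problem. Below is a minimal Go sketch of the same reachability probe, assuming only the host and port that appear in the log (localhost:8443); it is illustrative and not minikube's code.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver port the same way the failing kubectl calls do.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the log: dial tcp [::1]:8443: connect: connection refused,
		// i.e. no kube-apiserver is listening yet.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is open; a failure here would be higher in the stack")
}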
	I1213 16:13:48.201848 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:48.212860 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:48.212934 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:48.241802 1542350 cri.go:89] found id: ""
	I1213 16:13:48.241830 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.241838 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:48.241845 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:48.241908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:48.270100 1542350 cri.go:89] found id: ""
	I1213 16:13:48.270128 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.270137 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:48.270143 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:48.270207 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:48.295048 1542350 cri.go:89] found id: ""
	I1213 16:13:48.295073 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.295081 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:48.295087 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:48.295150 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:48.320949 1542350 cri.go:89] found id: ""
	I1213 16:13:48.320974 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.320983 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:48.320989 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:48.321048 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:48.357548 1542350 cri.go:89] found id: ""
	I1213 16:13:48.357572 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.357580 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:48.357586 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:48.357646 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:48.395642 1542350 cri.go:89] found id: ""
	I1213 16:13:48.395676 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.395685 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:48.395692 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:48.395761 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:48.426584 1542350 cri.go:89] found id: ""
	I1213 16:13:48.426611 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.426620 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:48.426626 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:48.426687 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:48.451854 1542350 cri.go:89] found id: ""
	I1213 16:13:48.451890 1542350 logs.go:282] 0 containers: []
	W1213 16:13:48.451899 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:48.451923 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:48.451938 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:48.508044 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:48.508086 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:48.523941 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:48.523971 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:48.594870 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:48.579201    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.579927    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.583793    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.584114    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.585609    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:48.579201    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.579927    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.583793    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.584114    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:48.585609    2886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:48.594893 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:48.594906 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:48.621999 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:48.622078 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
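The cri.go/logs.go lines above enumerate the expected control-plane components one at a time with "crictl ps -a --quiet --name=<component>", and every query returns an empty ID list ("0 containers"). A rough Go sketch of that enumeration follows, assuming only the component names and crictl flags that appear in the log; everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, name := range components {
		// Same query the log runs for each component.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Corresponds to: found id: "" / No container was found matching "<name>".
			fmt.Printf("no containers for %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}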
	I1213 16:13:51.156024 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:51.167178 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:51.167252 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:51.198661 1542350 cri.go:89] found id: ""
	I1213 16:13:51.198684 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.198692 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:51.198699 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:51.198757 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:51.224046 1542350 cri.go:89] found id: ""
	I1213 16:13:51.224069 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.224077 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:51.224083 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:51.224149 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:51.253035 1542350 cri.go:89] found id: ""
	I1213 16:13:51.253062 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.253070 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:51.253076 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:51.253164 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:51.278917 1542350 cri.go:89] found id: ""
	I1213 16:13:51.278943 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.278952 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:51.278958 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:51.279016 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:51.305382 1542350 cri.go:89] found id: ""
	I1213 16:13:51.305405 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.305413 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:51.305419 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:51.305480 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:51.329703 1542350 cri.go:89] found id: ""
	I1213 16:13:51.329726 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.329735 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:51.329741 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:51.329800 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:51.359740 1542350 cri.go:89] found id: ""
	I1213 16:13:51.359762 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.359770 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:51.359776 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:51.359840 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:51.386446 1542350 cri.go:89] found id: ""
	I1213 16:13:51.386522 1542350 logs.go:282] 0 containers: []
	W1213 16:13:51.386544 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:51.386566 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:51.386589 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:51.412669 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:51.412707 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:51.453745 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:51.453775 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:51.511660 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:51.511698 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:51.527994 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:51.528025 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:51.595021 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:51.583262    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.584038    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.585894    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.586556    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.588263    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:51.583262    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.584038    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.585894    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.586556    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:51.588263    3012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:54.096158 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:54.107425 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:54.107512 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:54.138865 1542350 cri.go:89] found id: ""
	I1213 16:13:54.138891 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.138899 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:54.138905 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:54.138966 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:54.164096 1542350 cri.go:89] found id: ""
	I1213 16:13:54.164121 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.164130 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:54.164135 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:54.164195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:54.193309 1542350 cri.go:89] found id: ""
	I1213 16:13:54.193335 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.193345 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:54.193352 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:54.193416 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:54.219468 1542350 cri.go:89] found id: ""
	I1213 16:13:54.219490 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.219499 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:54.219520 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:54.219589 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:54.244935 1542350 cri.go:89] found id: ""
	I1213 16:13:54.244962 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.244971 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:54.244977 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:54.245038 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:54.274445 1542350 cri.go:89] found id: ""
	I1213 16:13:54.274472 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.274481 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:54.274488 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:54.274554 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:54.304121 1542350 cri.go:89] found id: ""
	I1213 16:13:54.304146 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.304154 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:54.304160 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:54.304217 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:54.329301 1542350 cri.go:89] found id: ""
	I1213 16:13:54.329327 1542350 logs.go:282] 0 containers: []
	W1213 16:13:54.329335 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:54.329350 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:54.329362 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:54.357962 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:54.358003 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:54.393726 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:54.393753 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:54.454879 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:54.454917 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:54.471046 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:54.471122 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:54.539675 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:54.530749    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.531242    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.532726    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.533202    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.535706    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:54.530749    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.531242    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.532726    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.533202    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:54.535706    3128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
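The timestamps show the same apiserver check repeating roughly every three seconds (16:13:45, :48, :51, :54, :57, ...). A minimal sketch of such a poll-until-deadline loop is below; the pgrep pattern is copied from the log, while the three-second interval and four-minute deadline are assumptions rather than minikube's actual settings.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Same liveness check the log runs: is a kube-apiserver process up?
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for kube-apiserver")
}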
	I1213 16:13:57.040543 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:57.051825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:57.051902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:13:57.080948 1542350 cri.go:89] found id: ""
	I1213 16:13:57.080975 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.080984 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:13:57.080990 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:13:57.081060 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:13:57.106564 1542350 cri.go:89] found id: ""
	I1213 16:13:57.106592 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.106602 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:13:57.106609 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:13:57.106674 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:13:57.132305 1542350 cri.go:89] found id: ""
	I1213 16:13:57.132332 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.132341 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:13:57.132347 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:13:57.132415 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:13:57.161893 1542350 cri.go:89] found id: ""
	I1213 16:13:57.161919 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.161928 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:13:57.161934 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:13:57.161996 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:13:57.187018 1542350 cri.go:89] found id: ""
	I1213 16:13:57.187042 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.187051 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:13:57.187057 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:13:57.187118 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:13:57.213450 1542350 cri.go:89] found id: ""
	I1213 16:13:57.213477 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.213486 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:13:57.213493 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:13:57.213598 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:13:57.239773 1542350 cri.go:89] found id: ""
	I1213 16:13:57.239799 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.239808 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:13:57.239814 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:13:57.239875 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:13:57.268874 1542350 cri.go:89] found id: ""
	I1213 16:13:57.268901 1542350 logs.go:282] 0 containers: []
	W1213 16:13:57.268910 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:13:57.268920 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:13:57.268932 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:13:57.325438 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:13:57.325478 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:13:57.345255 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:13:57.345288 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:13:57.419796 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:13:57.411216    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.412003    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.413617    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.414124    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.415814    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:13:57.411216    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.412003    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.413617    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.414124    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:13:57.415814    3228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:13:57.419818 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:13:57.419830 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:13:57.445711 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:13:57.445753 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:13:58.303454 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 16:13:58.370450 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:13:58.370563 1542350 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 16:13:58.485061 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 16:13:58.547882 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:13:58.547990 1542350 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
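Both addon applies fail for the same root cause as the earlier errors: kubectl needs the apiserver's OpenAPI endpoint on localhost:8443 to validate the manifests, and that port refuses connections. The "apply failed, will retry" lines imply a retry loop; a generic Go sketch of one follows, with the command line taken from the log and the backoff policy an assumption (not minikube's actual addons code).

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml"}
	for attempt, delay := 1, time.Second; attempt <= 5; attempt, delay = attempt+1, delay*2 {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err == nil {
			fmt.Println("applied:", string(out))
			return
		}
		// While nothing listens on :8443, every attempt fails exactly like the
		// errors above; backing off only spaces the attempts out.
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(delay)
	}
}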
	I1213 16:13:59.973778 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:13:59.984749 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:13:59.984822 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:00.047691 1542350 cri.go:89] found id: ""
	I1213 16:14:00.047719 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.047729 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:00.047735 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:00.047812 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:00.172004 1542350 cri.go:89] found id: ""
	I1213 16:14:00.172032 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.172042 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:00.172048 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:00.172124 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:00.225264 1542350 cri.go:89] found id: ""
	I1213 16:14:00.225417 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.225430 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:00.225441 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:00.225515 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:00.291798 1542350 cri.go:89] found id: ""
	I1213 16:14:00.291826 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.291837 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:00.291843 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:00.291915 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:00.322720 1542350 cri.go:89] found id: ""
	I1213 16:14:00.322775 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.322785 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:00.322802 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:00.322965 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:00.382229 1542350 cri.go:89] found id: ""
	I1213 16:14:00.382259 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.382268 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:00.382276 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:00.382353 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:00.428076 1542350 cri.go:89] found id: ""
	I1213 16:14:00.428104 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.428114 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:00.428122 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:00.428188 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:00.456283 1542350 cri.go:89] found id: ""
	I1213 16:14:00.456313 1542350 logs.go:282] 0 containers: []
	W1213 16:14:00.456322 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:00.456334 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:00.456347 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:00.487074 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:00.487103 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:00.543060 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:00.543096 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:00.559570 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:00.559599 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:00.643362 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:00.632447    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.633406    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637299    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637614    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.639111    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:00.632447    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.633406    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637299    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.637614    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:00.639111    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:00.643385 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:00.643398 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:03.169712 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:03.180422 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:03.180498 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:03.204986 1542350 cri.go:89] found id: ""
	I1213 16:14:03.205052 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.205078 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:03.205091 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:03.205167 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:03.229548 1542350 cri.go:89] found id: ""
	I1213 16:14:03.229624 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.229648 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:03.229667 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:03.229759 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:03.255379 1542350 cri.go:89] found id: ""
	I1213 16:14:03.255401 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.255410 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:03.255415 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:03.255474 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:03.281492 1542350 cri.go:89] found id: ""
	I1213 16:14:03.281516 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.281526 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:03.281532 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:03.281594 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:03.309687 1542350 cri.go:89] found id: ""
	I1213 16:14:03.309709 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.309717 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:03.309723 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:03.309781 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:03.342064 1542350 cri.go:89] found id: ""
	I1213 16:14:03.342088 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.342097 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:03.342104 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:03.342166 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:03.374355 1542350 cri.go:89] found id: ""
	I1213 16:14:03.374427 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.374449 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:03.374468 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:03.374551 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:03.402300 1542350 cri.go:89] found id: ""
	I1213 16:14:03.402373 1542350 logs.go:282] 0 containers: []
	W1213 16:14:03.402397 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:03.402419 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:03.402454 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:03.419291 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:03.419341 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:03.488415 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:03.479265    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.480042    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.481961    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.482530    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.483778    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:03.479265    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.480042    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.481961    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.482530    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:03.483778    3460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:03.488438 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:03.488450 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:03.513548 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:03.513583 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:03.541410 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:03.541438 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:06.098537 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:06.109444 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:06.109517 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:06.135738 1542350 cri.go:89] found id: ""
	I1213 16:14:06.135763 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.135772 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:06.135778 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:06.135838 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:06.164881 1542350 cri.go:89] found id: ""
	I1213 16:14:06.164907 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.164915 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:06.164921 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:06.165006 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:06.190132 1542350 cri.go:89] found id: ""
	I1213 16:14:06.190157 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.190166 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:06.190172 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:06.190237 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:06.214554 1542350 cri.go:89] found id: ""
	I1213 16:14:06.214588 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.214603 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:06.214610 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:06.214678 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:06.239546 1542350 cri.go:89] found id: ""
	I1213 16:14:06.239573 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.239582 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:06.239588 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:06.239675 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:06.265195 1542350 cri.go:89] found id: ""
	I1213 16:14:06.265223 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.265231 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:06.265237 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:06.265308 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:06.289926 1542350 cri.go:89] found id: ""
	I1213 16:14:06.289960 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.289969 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:06.289991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:06.290071 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:06.314603 1542350 cri.go:89] found id: ""
	I1213 16:14:06.314629 1542350 logs.go:282] 0 containers: []
	W1213 16:14:06.314637 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:06.314647 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:06.314683 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:06.371177 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:06.371258 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:06.393856 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:06.393930 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:06.459001 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:06.450168    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.450823    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.452646    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.453140    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.454779    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:06.450168    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.450823    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.452646    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.453140    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:06.454779    3575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:06.459025 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:06.459038 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:06.484151 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:06.484188 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:09.017168 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:09.028196 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:09.028273 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:09.056958 1542350 cri.go:89] found id: ""
	I1213 16:14:09.056983 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.056991 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:09.056997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:09.057056 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:09.081528 1542350 cri.go:89] found id: ""
	I1213 16:14:09.081554 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.081562 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:09.081568 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:09.081625 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:09.106979 1542350 cri.go:89] found id: ""
	I1213 16:14:09.107006 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.107015 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:09.107022 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:09.107082 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:09.131992 1542350 cri.go:89] found id: ""
	I1213 16:14:09.132014 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.132022 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:09.132031 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:09.132090 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:09.159379 1542350 cri.go:89] found id: ""
	I1213 16:14:09.159403 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.159411 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:09.159417 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:09.159475 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:09.188125 1542350 cri.go:89] found id: ""
	I1213 16:14:09.188148 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.188157 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:09.188163 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:09.188223 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:09.213724 1542350 cri.go:89] found id: ""
	I1213 16:14:09.213746 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.213755 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:09.213762 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:09.213820 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:09.239228 1542350 cri.go:89] found id: ""
	I1213 16:14:09.239250 1542350 logs.go:282] 0 containers: []
	W1213 16:14:09.239258 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:09.239269 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:09.239280 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:09.264873 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:09.264908 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:09.297705 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:09.297733 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:09.356080 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:09.356130 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:09.376099 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:09.376130 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:09.447156 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:09.438767    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.439139    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.440687    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.441254    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.442932    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:09.438767    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.439139    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.440687    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.441254    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:09.442932    3701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:11.948214 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:11.961565 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:11.961686 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:11.989927 1542350 cri.go:89] found id: ""
	I1213 16:14:11.989978 1542350 logs.go:282] 0 containers: []
	W1213 16:14:11.989988 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:11.989997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:11.990074 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:12.015827 1542350 cri.go:89] found id: ""
	I1213 16:14:12.015853 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.015863 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:12.015869 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:12.015931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:12.043024 1542350 cri.go:89] found id: ""
	I1213 16:14:12.043052 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.043061 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:12.043067 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:12.043129 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:12.068348 1542350 cri.go:89] found id: ""
	I1213 16:14:12.068376 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.068385 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:12.068390 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:12.068450 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:12.097740 1542350 cri.go:89] found id: ""
	I1213 16:14:12.097774 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.097783 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:12.097790 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:12.097858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:12.121723 1542350 cri.go:89] found id: ""
	I1213 16:14:12.121755 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.121764 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:12.121770 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:12.121842 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:12.150786 1542350 cri.go:89] found id: ""
	I1213 16:14:12.150813 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.150821 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:12.150827 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:12.150892 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:12.175342 1542350 cri.go:89] found id: ""
	I1213 16:14:12.175367 1542350 logs.go:282] 0 containers: []
	W1213 16:14:12.175376 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:12.175386 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:12.175404 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:12.231019 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:12.231066 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:12.247225 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:12.247257 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:12.311535 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:12.303778    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.304209    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.305700    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.306034    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.307505    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:12.303778    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.304209    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.305700    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.306034    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:12.307505    3796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:12.311562 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:12.311575 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:12.336385 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:12.336419 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:14.871456 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:14.883637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:14.883706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:14.912506 1542350 cri.go:89] found id: ""
	I1213 16:14:14.912530 1542350 logs.go:282] 0 containers: []
	W1213 16:14:14.912539 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:14.912545 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:14.912612 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:14.924965 1542350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:14:14.948875 1542350 cri.go:89] found id: ""
	I1213 16:14:14.948908 1542350 logs.go:282] 0 containers: []
	W1213 16:14:14.948917 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:14.948923 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:14.948983 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	W1213 16:14:15.004427 1542350 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 16:14:15.004545 1542350 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 16:14:15.004879 1542350 cri.go:89] found id: ""
	I1213 16:14:15.004917 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.005050 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:15.005059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:15.005129 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:15.016719 1542350 out.go:179] * Enabled addons: 
	I1213 16:14:15.019727 1542350 addons.go:530] duration metric: took 1m53.607875831s for enable addons: enabled=[]
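	The storage-provisioner apply above fails for the same underlying reason: kubectl cannot download the OpenAPI schema for validation because the apiserver on localhost:8443 is unreachable, so the addon enable step completes with an empty addon list. A hedged sketch of how that apply could be retried once the apiserver answers; the readiness loop is an assumption, the paths and flags are the ones shown in the log, and this is not part of the captured output:
	
	# Wait until the apiserver responds, then re-apply the addon manifest.
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get --raw /readyz >/dev/null 2>&1; do
	  sleep 2
	done
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml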
	I1213 16:14:15.061323 1542350 cri.go:89] found id: ""
	I1213 16:14:15.061351 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.061359 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:15.061366 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:15.061431 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:15.089262 1542350 cri.go:89] found id: ""
	I1213 16:14:15.089290 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.089310 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:15.089351 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:15.089416 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:15.114964 1542350 cri.go:89] found id: ""
	I1213 16:14:15.114992 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.115001 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:15.115010 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:15.115087 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:15.150205 1542350 cri.go:89] found id: ""
	I1213 16:14:15.150228 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.150237 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:15.150243 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:15.150305 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:15.179096 1542350 cri.go:89] found id: ""
	I1213 16:14:15.179124 1542350 logs.go:282] 0 containers: []
	W1213 16:14:15.179159 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:15.179170 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:15.179186 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:15.240671 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:15.240716 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:15.257989 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:15.258020 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:15.327105 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:15.316562    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.317280    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321065    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321732    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.323049    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:15.316562    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.317280    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321065    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.321732    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:15.323049    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:15.327125 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:15.327139 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:15.356556 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:15.356601 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:17.895435 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:17.906103 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:17.906178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:17.934229 1542350 cri.go:89] found id: ""
	I1213 16:14:17.934255 1542350 logs.go:282] 0 containers: []
	W1213 16:14:17.934263 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:17.934270 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:17.934329 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:17.960923 1542350 cri.go:89] found id: ""
	I1213 16:14:17.960947 1542350 logs.go:282] 0 containers: []
	W1213 16:14:17.960955 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:17.960980 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:17.961039 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:17.986062 1542350 cri.go:89] found id: ""
	I1213 16:14:17.986096 1542350 logs.go:282] 0 containers: []
	W1213 16:14:17.986105 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:17.986111 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:17.986180 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:18.019636 1542350 cri.go:89] found id: ""
	I1213 16:14:18.019718 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.019741 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:18.019761 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:18.019858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:18.046719 1542350 cri.go:89] found id: ""
	I1213 16:14:18.046787 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.046810 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:18.046829 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:18.046924 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:18.073562 1542350 cri.go:89] found id: ""
	I1213 16:14:18.073641 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.073665 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:18.073685 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:18.073763 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:18.100968 1542350 cri.go:89] found id: ""
	I1213 16:14:18.101005 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.101014 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:18.101021 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:18.101086 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:18.127366 1542350 cri.go:89] found id: ""
	I1213 16:14:18.127391 1542350 logs.go:282] 0 containers: []
	W1213 16:14:18.127401 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:18.127410 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:18.127422 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:18.160263 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:18.160289 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:18.217033 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:18.217066 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:18.234115 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:18.234146 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:18.301091 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:18.292902    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.293484    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295231    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295655    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.297297    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:18.292902    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.293484    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295231    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.295655    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:18.297297    4038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:18.301112 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:18.301126 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:20.828738 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:20.843249 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:20.843356 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:20.878301 1542350 cri.go:89] found id: ""
	I1213 16:14:20.878326 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.878335 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:20.878341 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:20.878400 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:20.911841 1542350 cri.go:89] found id: ""
	I1213 16:14:20.911863 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.911872 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:20.911877 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:20.911937 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:20.938802 1542350 cri.go:89] found id: ""
	I1213 16:14:20.938825 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.938833 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:20.938839 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:20.938895 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:20.963358 1542350 cri.go:89] found id: ""
	I1213 16:14:20.963382 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.963395 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:20.963402 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:20.963462 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:20.988428 1542350 cri.go:89] found id: ""
	I1213 16:14:20.988500 1542350 logs.go:282] 0 containers: []
	W1213 16:14:20.988516 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:20.988523 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:20.988586 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:21.015053 1542350 cri.go:89] found id: ""
	I1213 16:14:21.015088 1542350 logs.go:282] 0 containers: []
	W1213 16:14:21.015097 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:21.015104 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:21.015168 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:21.041720 1542350 cri.go:89] found id: ""
	I1213 16:14:21.041747 1542350 logs.go:282] 0 containers: []
	W1213 16:14:21.041761 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:21.041767 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:21.041844 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:21.066333 1542350 cri.go:89] found id: ""
	I1213 16:14:21.066358 1542350 logs.go:282] 0 containers: []
	W1213 16:14:21.066367 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:21.066376 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:21.066390 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:21.092074 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:21.092113 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:21.119921 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:21.119949 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:21.175737 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:21.175772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:21.192772 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:21.192802 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:21.258320 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:21.248712    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.249237    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.250941    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.251593    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.253749    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:21.248712    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.249237    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.250941    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.251593    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:21.253749    4153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:23.760202 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:23.770818 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:23.770889 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:23.797015 1542350 cri.go:89] found id: ""
	I1213 16:14:23.797038 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.797047 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:23.797053 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:23.797113 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:23.822062 1542350 cri.go:89] found id: ""
	I1213 16:14:23.822085 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.822093 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:23.822100 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:23.822158 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:23.874192 1542350 cri.go:89] found id: ""
	I1213 16:14:23.874214 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.874223 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:23.874229 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:23.874286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:23.900200 1542350 cri.go:89] found id: ""
	I1213 16:14:23.900221 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.900230 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:23.900236 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:23.900296 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:23.926269 1542350 cri.go:89] found id: ""
	I1213 16:14:23.926298 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.926306 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:23.926313 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:23.926373 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:23.953863 1542350 cri.go:89] found id: ""
	I1213 16:14:23.953893 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.953902 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:23.953909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:23.953978 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:23.978285 1542350 cri.go:89] found id: ""
	I1213 16:14:23.978314 1542350 logs.go:282] 0 containers: []
	W1213 16:14:23.978323 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:23.978332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:23.978392 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:24.004367 1542350 cri.go:89] found id: ""
	I1213 16:14:24.004397 1542350 logs.go:282] 0 containers: []
	W1213 16:14:24.004407 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:24.004418 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:24.004433 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:24.038684 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:24.038715 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:24.093699 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:24.093736 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:24.109888 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:24.109958 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:24.176373 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:24.167217    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.168183    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.169904    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.170492    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.172210    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:24.167217    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.168183    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.169904    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.170492    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:24.172210    4263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:24.176410 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:24.176423 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:26.703702 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:26.715414 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:26.715505 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:26.741617 1542350 cri.go:89] found id: ""
	I1213 16:14:26.741644 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.741653 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:26.741660 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:26.741725 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:26.773142 1542350 cri.go:89] found id: ""
	I1213 16:14:26.773166 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.773175 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:26.773180 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:26.773248 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:26.800698 1542350 cri.go:89] found id: ""
	I1213 16:14:26.800770 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.800792 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:26.800812 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:26.800916 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:26.826188 1542350 cri.go:89] found id: ""
	I1213 16:14:26.826213 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.826222 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:26.826228 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:26.826290 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:26.858537 1542350 cri.go:89] found id: ""
	I1213 16:14:26.858564 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.858573 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:26.858579 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:26.858644 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:26.893373 1542350 cri.go:89] found id: ""
	I1213 16:14:26.893401 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.893411 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:26.893417 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:26.893491 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:26.924977 1542350 cri.go:89] found id: ""
	I1213 16:14:26.925004 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.925013 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:26.925019 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:26.925080 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:26.949933 1542350 cri.go:89] found id: ""
	I1213 16:14:26.949962 1542350 logs.go:282] 0 containers: []
	W1213 16:14:26.949971 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:26.949980 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:26.949997 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:26.980349 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:26.980380 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:27.038924 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:27.038960 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:27.055463 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:27.055494 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:27.125589 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:27.116928    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.117440    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119173    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119547    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.121058    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:27.116928    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.117440    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119173    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.119547    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:27.121058    4371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:27.125608 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:27.125624 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:29.652560 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:29.663991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:29.664080 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:29.692800 1542350 cri.go:89] found id: ""
	I1213 16:14:29.692825 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.692834 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:29.692841 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:29.692908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:29.724553 1542350 cri.go:89] found id: ""
	I1213 16:14:29.724585 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.724595 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:29.724603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:29.724665 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:29.750391 1542350 cri.go:89] found id: ""
	I1213 16:14:29.750460 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.750484 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:29.750502 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:29.750593 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:29.774900 1542350 cri.go:89] found id: ""
	I1213 16:14:29.774968 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.774994 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:29.775012 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:29.775104 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:29.800460 1542350 cri.go:89] found id: ""
	I1213 16:14:29.800503 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.800512 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:29.800518 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:29.800581 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:29.825184 1542350 cri.go:89] found id: ""
	I1213 16:14:29.825261 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.825285 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:29.825305 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:29.825391 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:29.857574 1542350 cri.go:89] found id: ""
	I1213 16:14:29.857604 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.857613 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:29.857619 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:29.857681 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:29.886573 1542350 cri.go:89] found id: ""
	I1213 16:14:29.886602 1542350 logs.go:282] 0 containers: []
	W1213 16:14:29.886610 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:29.886620 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:29.886636 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:29.954547 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:29.945729    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.946349    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948212    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948764    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.950488    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:29.945729    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.946349    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948212    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.948764    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:29.950488    4466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:29.954614 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:29.954636 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:29.980281 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:29.980318 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:30.020553 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:30.020640 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:30.112248 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:30.112288 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:32.632543 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:32.644615 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:32.644739 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:32.671076 1542350 cri.go:89] found id: ""
	I1213 16:14:32.671103 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.671115 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:32.671124 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:32.671204 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:32.705219 1542350 cri.go:89] found id: ""
	I1213 16:14:32.705245 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.705255 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:32.705264 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:32.705345 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:32.734663 1542350 cri.go:89] found id: ""
	I1213 16:14:32.734764 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.734796 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:32.734826 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:32.734911 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:32.763416 1542350 cri.go:89] found id: ""
	I1213 16:14:32.763441 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.763451 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:32.763457 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:32.763519 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:32.790404 1542350 cri.go:89] found id: ""
	I1213 16:14:32.790478 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.790500 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:32.790519 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:32.790638 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:32.818613 1542350 cri.go:89] found id: ""
	I1213 16:14:32.818699 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.818735 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:32.818773 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:32.818908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:32.850999 1542350 cri.go:89] found id: ""
	I1213 16:14:32.851029 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.851038 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:32.851050 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:32.851113 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:32.883800 1542350 cri.go:89] found id: ""
	I1213 16:14:32.883828 1542350 logs.go:282] 0 containers: []
	W1213 16:14:32.883837 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:32.883846 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:32.883857 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:32.950061 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:32.950111 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:32.967586 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:32.967617 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:33.038320 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:33.029200    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.030090    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.031863    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.032322    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.033913    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:33.029200    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.030090    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.031863    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.032322    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:33.033913    4586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:33.038342 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:33.038357 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:33.066098 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:33.066154 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:35.607481 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:35.619526 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:35.619589 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:35.646097 1542350 cri.go:89] found id: ""
	I1213 16:14:35.646120 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.646131 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:35.646137 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:35.646197 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:35.671288 1542350 cri.go:89] found id: ""
	I1213 16:14:35.671349 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.671358 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:35.671364 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:35.671428 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:35.696891 1542350 cri.go:89] found id: ""
	I1213 16:14:35.696915 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.696923 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:35.696930 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:35.696990 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:35.722027 1542350 cri.go:89] found id: ""
	I1213 16:14:35.722049 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.722057 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:35.722063 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:35.722120 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:35.746428 1542350 cri.go:89] found id: ""
	I1213 16:14:35.746450 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.746458 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:35.746465 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:35.746521 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:35.771433 1542350 cri.go:89] found id: ""
	I1213 16:14:35.771456 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.771465 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:35.771471 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:35.771527 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:35.795226 1542350 cri.go:89] found id: ""
	I1213 16:14:35.795292 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.795408 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:35.795422 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:35.795494 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:35.819205 1542350 cri.go:89] found id: ""
	I1213 16:14:35.819237 1542350 logs.go:282] 0 containers: []
	W1213 16:14:35.819246 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:35.819256 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:35.819268 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:35.856667 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:35.856698 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:35.921282 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:35.921317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:35.937351 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:35.937379 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:36.013024 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:35.998154    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:35.998523    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000027    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000325    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.001562    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:35.998154    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:35.998523    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000027    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.000325    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:36.001562    4708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:36.013050 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:36.013065 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:38.540010 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:38.553894 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:38.553969 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:38.587080 1542350 cri.go:89] found id: ""
	I1213 16:14:38.587102 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.587110 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:38.587116 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:38.587180 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:38.615796 1542350 cri.go:89] found id: ""
	I1213 16:14:38.615820 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.615829 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:38.615835 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:38.615895 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:38.652609 1542350 cri.go:89] found id: ""
	I1213 16:14:38.652634 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.652643 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:38.652649 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:38.652706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:38.681712 1542350 cri.go:89] found id: ""
	I1213 16:14:38.681738 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.681747 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:38.681753 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:38.681812 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:38.707047 1542350 cri.go:89] found id: ""
	I1213 16:14:38.707076 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.707085 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:38.707091 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:38.707154 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:38.731834 1542350 cri.go:89] found id: ""
	I1213 16:14:38.731868 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.731878 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:38.731884 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:38.731951 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:38.755752 1542350 cri.go:89] found id: ""
	I1213 16:14:38.755816 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.755838 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:38.755855 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:38.755940 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:38.780290 1542350 cri.go:89] found id: ""
	I1213 16:14:38.780316 1542350 logs.go:282] 0 containers: []
	W1213 16:14:38.780325 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:38.780335 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:38.780354 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:38.837581 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:38.837613 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:38.855100 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:38.855130 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:38.927088 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:38.917976    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.918723    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.920509    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.921087    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.922821    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:38.917976    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.918723    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.920509    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.921087    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:38.922821    4810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:38.927155 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:38.927178 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:38.952089 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:38.952127 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:41.483644 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:41.494493 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:41.494574 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:41.518966 1542350 cri.go:89] found id: ""
	I1213 16:14:41.518988 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.518996 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:41.519002 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:41.519066 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:41.545695 1542350 cri.go:89] found id: ""
	I1213 16:14:41.545720 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.545729 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:41.545734 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:41.545798 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:41.571565 1542350 cri.go:89] found id: ""
	I1213 16:14:41.571591 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.571600 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:41.571606 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:41.571673 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:41.619450 1542350 cri.go:89] found id: ""
	I1213 16:14:41.619473 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.619482 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:41.619488 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:41.619548 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:41.653736 1542350 cri.go:89] found id: ""
	I1213 16:14:41.653757 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.653766 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:41.653773 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:41.653835 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:41.682235 1542350 cri.go:89] found id: ""
	I1213 16:14:41.682257 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.682265 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:41.682272 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:41.682332 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:41.708453 1542350 cri.go:89] found id: ""
	I1213 16:14:41.708475 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.708489 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:41.708496 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:41.708554 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:41.737148 1542350 cri.go:89] found id: ""
	I1213 16:14:41.737171 1542350 logs.go:282] 0 containers: []
	W1213 16:14:41.737179 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:41.737193 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:41.737205 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:41.792082 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:41.792120 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:41.808566 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:41.808597 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:41.888202 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:41.877628    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.878620    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.880407    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.881012    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.883935    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:41.877628    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.878620    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.880407    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.881012    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:41.883935    4922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:41.888226 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:41.888238 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:41.913429 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:41.913466 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:44.445881 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:44.456550 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:44.456627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:44.482008 1542350 cri.go:89] found id: ""
	I1213 16:14:44.482031 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.482039 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:44.482045 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:44.482103 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:44.507630 1542350 cri.go:89] found id: ""
	I1213 16:14:44.507654 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.507662 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:44.507668 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:44.507729 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:44.536680 1542350 cri.go:89] found id: ""
	I1213 16:14:44.536704 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.536713 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:44.536719 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:44.536778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:44.565166 1542350 cri.go:89] found id: ""
	I1213 16:14:44.565189 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.565199 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:44.565205 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:44.565265 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:44.598174 1542350 cri.go:89] found id: ""
	I1213 16:14:44.598197 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.598206 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:44.598214 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:44.598280 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:44.640061 1542350 cri.go:89] found id: ""
	I1213 16:14:44.640084 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.640092 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:44.640099 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:44.640159 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:44.671940 1542350 cri.go:89] found id: ""
	I1213 16:14:44.671968 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.671976 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:44.671982 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:44.672044 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:44.698885 1542350 cri.go:89] found id: ""
	I1213 16:14:44.698907 1542350 logs.go:282] 0 containers: []
	W1213 16:14:44.698916 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:44.698925 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:44.698939 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:44.715019 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:44.715090 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:44.777959 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:44.769893    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.770508    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772034    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772476    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.773993    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:44.769893    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.770508    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772034    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.772476    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:44.773993    5031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:44.777983 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:44.777996 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:44.803994 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:44.804031 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:44.835446 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:44.835476 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:47.402282 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:47.413184 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:47.413252 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:47.439678 1542350 cri.go:89] found id: ""
	I1213 16:14:47.439702 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.439710 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:47.439717 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:47.439777 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:47.469694 1542350 cri.go:89] found id: ""
	I1213 16:14:47.469720 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.469728 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:47.469734 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:47.469797 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:47.495280 1542350 cri.go:89] found id: ""
	I1213 16:14:47.495306 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.495339 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:47.495346 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:47.495408 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:47.525092 1542350 cri.go:89] found id: ""
	I1213 16:14:47.525118 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.525127 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:47.525133 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:47.525194 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:47.551755 1542350 cri.go:89] found id: ""
	I1213 16:14:47.551782 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.551790 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:47.551797 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:47.551858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:47.577368 1542350 cri.go:89] found id: ""
	I1213 16:14:47.577393 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.577402 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:47.577408 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:47.577479 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:47.603993 1542350 cri.go:89] found id: ""
	I1213 16:14:47.604016 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.604024 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:47.604030 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:47.604095 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:47.634166 1542350 cri.go:89] found id: ""
	I1213 16:14:47.634188 1542350 logs.go:282] 0 containers: []
	W1213 16:14:47.634197 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:47.634206 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:47.634217 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:47.698875 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:47.698911 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:47.715548 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:47.715580 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:47.783485 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:47.774772    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.775432    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777207    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777881    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.779547    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:47.774772    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.775432    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777207    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.777881    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:47.779547    5147 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:47.783508 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:47.783521 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:47.809639 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:47.809672 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:50.342353 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:50.355175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:50.355303 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:50.381034 1542350 cri.go:89] found id: ""
	I1213 16:14:50.381066 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.381076 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:50.381084 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:50.381166 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:50.409181 1542350 cri.go:89] found id: ""
	I1213 16:14:50.409208 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.409217 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:50.409222 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:50.409286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:50.438419 1542350 cri.go:89] found id: ""
	I1213 16:14:50.438451 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.438460 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:50.438466 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:50.438525 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:50.468687 1542350 cri.go:89] found id: ""
	I1213 16:14:50.468713 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.468721 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:50.468728 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:50.468789 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:50.498096 1542350 cri.go:89] found id: ""
	I1213 16:14:50.498163 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.498187 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:50.498205 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:50.498292 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:50.523754 1542350 cri.go:89] found id: ""
	I1213 16:14:50.523820 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.523835 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:50.523843 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:50.523902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:50.555302 1542350 cri.go:89] found id: ""
	I1213 16:14:50.555387 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.555403 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:50.555410 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:50.555477 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:50.581005 1542350 cri.go:89] found id: ""
	I1213 16:14:50.581035 1542350 logs.go:282] 0 containers: []
	W1213 16:14:50.581044 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:50.581054 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:50.581067 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:50.611931 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:50.612005 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:50.650728 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:50.650754 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:50.709840 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:50.709878 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:50.729613 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:50.729711 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:50.796424 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:50.788003    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.788585    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790191    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790714    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.792323    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:50.788003    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.788585    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790191    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.790714    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:50.792323    5271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:53.298328 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:53.309106 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:53.309178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:53.333481 1542350 cri.go:89] found id: ""
	I1213 16:14:53.333513 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.333523 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:53.333529 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:53.333590 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:53.358898 1542350 cri.go:89] found id: ""
	I1213 16:14:53.358923 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.358932 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:53.358938 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:53.358999 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:53.384286 1542350 cri.go:89] found id: ""
	I1213 16:14:53.384311 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.384322 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:53.384329 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:53.384388 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:53.408999 1542350 cri.go:89] found id: ""
	I1213 16:14:53.409022 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.409031 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:53.409037 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:53.409102 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:53.437666 1542350 cri.go:89] found id: ""
	I1213 16:14:53.437688 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.437696 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:53.437703 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:53.437764 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:53.462775 1542350 cri.go:89] found id: ""
	I1213 16:14:53.462868 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.462885 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:53.462893 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:53.462955 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:53.489379 1542350 cri.go:89] found id: ""
	I1213 16:14:53.489403 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.489413 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:53.489419 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:53.489479 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:53.513660 1542350 cri.go:89] found id: ""
	I1213 16:14:53.513683 1542350 logs.go:282] 0 containers: []
	W1213 16:14:53.513691 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:53.513701 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:53.513711 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:53.544644 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:53.544670 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:53.603653 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:53.603733 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:53.620761 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:53.620846 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:53.694809 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:53.685787    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.686311    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688155    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688956    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.690579    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:53.685787    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.686311    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688155    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.688956    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:53.690579    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:53.694871 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:53.694886 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:56.222442 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:56.233418 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:56.233521 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:56.262552 1542350 cri.go:89] found id: ""
	I1213 16:14:56.262578 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.262587 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:56.262594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:56.262677 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:56.290583 1542350 cri.go:89] found id: ""
	I1213 16:14:56.290611 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.290620 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:56.290627 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:56.290778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:56.316264 1542350 cri.go:89] found id: ""
	I1213 16:14:56.316292 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.316300 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:56.316306 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:56.316366 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:56.341047 1542350 cri.go:89] found id: ""
	I1213 16:14:56.341072 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.341080 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:56.341086 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:56.341163 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:56.369874 1542350 cri.go:89] found id: ""
	I1213 16:14:56.369909 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.369918 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:56.369924 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:56.369993 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:56.396373 1542350 cri.go:89] found id: ""
	I1213 16:14:56.396400 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.396408 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:56.396415 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:56.396480 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:56.421264 1542350 cri.go:89] found id: ""
	I1213 16:14:56.421286 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.421294 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:56.421300 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:56.421362 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:56.449683 1542350 cri.go:89] found id: ""
	I1213 16:14:56.449708 1542350 logs.go:282] 0 containers: []
	W1213 16:14:56.449717 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:56.449727 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:56.449740 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:56.513612 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:56.505286    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.506108    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.507846    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.508199    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.509740    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:56.505286    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.506108    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.507846    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.508199    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:56.509740    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:56.513635 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:56.513648 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:56.539159 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:56.539193 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:14:56.569885 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:56.569913 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:56.636667 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:56.636712 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:59.161215 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:14:59.172070 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:14:59.172139 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:14:59.196977 1542350 cri.go:89] found id: ""
	I1213 16:14:59.197003 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.197013 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:14:59.197019 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:14:59.197124 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:14:59.222813 1542350 cri.go:89] found id: ""
	I1213 16:14:59.222839 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.222849 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:14:59.222855 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:14:59.222921 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:14:59.249285 1542350 cri.go:89] found id: ""
	I1213 16:14:59.249309 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.249317 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:14:59.249323 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:14:59.249385 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:14:59.275052 1542350 cri.go:89] found id: ""
	I1213 16:14:59.275076 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.275085 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:14:59.275091 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:14:59.275152 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:14:59.301297 1542350 cri.go:89] found id: ""
	I1213 16:14:59.301323 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.301331 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:14:59.301337 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:14:59.301395 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:14:59.326556 1542350 cri.go:89] found id: ""
	I1213 16:14:59.326582 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.326591 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:14:59.326599 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:14:59.326658 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:14:59.360044 1542350 cri.go:89] found id: ""
	I1213 16:14:59.360070 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.360079 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:14:59.360085 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:14:59.360145 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:14:59.385355 1542350 cri.go:89] found id: ""
	I1213 16:14:59.385380 1542350 logs.go:282] 0 containers: []
	W1213 16:14:59.385389 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:14:59.385398 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:14:59.385410 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:14:59.441005 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:14:59.441040 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:14:59.456936 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:14:59.456968 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:14:59.523389 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:14:59.514843    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.515544    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517163    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517739    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.519447    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:14:59.514843    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.515544    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517163    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.517739    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:14:59.519447    5598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:14:59.523410 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:14:59.523423 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:14:59.548680 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:14:59.548717 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:02.077266 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:02.091997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:02.092082 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:02.125051 1542350 cri.go:89] found id: ""
	I1213 16:15:02.125079 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.125088 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:02.125095 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:02.125158 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:02.155518 1542350 cri.go:89] found id: ""
	I1213 16:15:02.155547 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.155555 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:02.155567 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:02.155626 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:02.180408 1542350 cri.go:89] found id: ""
	I1213 16:15:02.180435 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.180444 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:02.180450 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:02.180541 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:02.206923 1542350 cri.go:89] found id: ""
	I1213 16:15:02.206957 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.206966 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:02.206979 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:02.207049 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:02.234308 1542350 cri.go:89] found id: ""
	I1213 16:15:02.234332 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.234341 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:02.234347 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:02.234412 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:02.260647 1542350 cri.go:89] found id: ""
	I1213 16:15:02.260671 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.260680 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:02.260686 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:02.260746 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:02.287051 1542350 cri.go:89] found id: ""
	I1213 16:15:02.287075 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.287083 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:02.287089 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:02.287151 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:02.313703 1542350 cri.go:89] found id: ""
	I1213 16:15:02.313726 1542350 logs.go:282] 0 containers: []
	W1213 16:15:02.313734 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:02.313744 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:02.313755 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:02.369628 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:02.369663 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:02.385814 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:02.385896 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:02.450440 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:02.441569    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.442433    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.443449    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445136    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445445    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:02.441569    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.442433    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.443449    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445136    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:02.445445    5710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:02.450460 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:02.450475 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:02.475994 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:02.476032 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:05.008952 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:05.023767 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:05.023852 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:05.048943 1542350 cri.go:89] found id: ""
	I1213 16:15:05.048970 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.048979 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:05.048985 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:05.049046 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:05.073030 1542350 cri.go:89] found id: ""
	I1213 16:15:05.073057 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.073066 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:05.073072 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:05.073141 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:05.113695 1542350 cri.go:89] found id: ""
	I1213 16:15:05.113724 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.113733 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:05.113739 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:05.113798 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:05.143435 1542350 cri.go:89] found id: ""
	I1213 16:15:05.143462 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.143471 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:05.143476 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:05.143533 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:05.169643 1542350 cri.go:89] found id: ""
	I1213 16:15:05.169672 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.169682 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:05.169694 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:05.169756 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:05.194836 1542350 cri.go:89] found id: ""
	I1213 16:15:05.194865 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.194874 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:05.194881 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:05.194939 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:05.223183 1542350 cri.go:89] found id: ""
	I1213 16:15:05.223208 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.223216 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:05.223223 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:05.223284 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:05.247344 1542350 cri.go:89] found id: ""
	I1213 16:15:05.247368 1542350 logs.go:282] 0 containers: []
	W1213 16:15:05.247377 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:05.247386 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:05.247400 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:05.302110 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:05.302144 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:05.318507 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:05.318537 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:05.383855 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:05.375536    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.376623    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.377525    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.378529    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.380053    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:05.375536    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.376623    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.377525    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.378529    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:05.380053    5823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:05.383878 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:05.383891 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:05.408947 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:05.408984 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
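The probe loop recorded above (checking each control-plane container with crictl, then gathering containerd, kubelet, dmesg, and describe-nodes output) can be replayed by hand while the node is in this state. A minimal sketch, assuming shell access to the node under test (for example via minikube ssh; the profile name is not fixed by this excerpt and must be supplied by the reader) and that crictl, journalctl, and the bundled kubectl binary exist at the paths shown in the log:

#!/usr/bin/env bash
# Sketch only: replays the per-component probe seen in the log above.
# Assumption (not taken from the report): run inside the minikube node,
# e.g. after `minikube ssh`, with crictl and journalctl on PATH.
set -u

# Check each control-plane container the same way the log does.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(sudo crictl ps -a --quiet --name="${name}")
  if [ -z "${ids}" ]; then
    echo "no container found matching ${name}"
  else
    echo "${name}: ${ids}"
  fi
done

# Same log sources minikube gathers while waiting for the apiserver.
sudo journalctl -u containerd -n 400
sudo journalctl -u kubelet -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

If every iteration reports no container found and describe nodes still fails with connection refused on localhost:8443, the apiserver container was never created, which is consistent with the repeated gathering cycles in this log.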
	I1213 16:15:07.939749 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:07.950076 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:07.950150 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:07.975327 1542350 cri.go:89] found id: ""
	I1213 16:15:07.975351 1542350 logs.go:282] 0 containers: []
	W1213 16:15:07.975360 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:07.975366 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:07.975423 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:07.999830 1542350 cri.go:89] found id: ""
	I1213 16:15:07.999856 1542350 logs.go:282] 0 containers: []
	W1213 16:15:07.999864 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:07.999870 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:07.999928 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:08.026521 1542350 cri.go:89] found id: ""
	I1213 16:15:08.026547 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.026556 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:08.026562 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:08.026627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:08.053320 1542350 cri.go:89] found id: ""
	I1213 16:15:08.053343 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.053352 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:08.053358 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:08.053418 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:08.084631 1542350 cri.go:89] found id: ""
	I1213 16:15:08.084654 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.084663 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:08.084669 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:08.084727 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:08.115761 1542350 cri.go:89] found id: ""
	I1213 16:15:08.115842 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.115866 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:08.115884 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:08.115992 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:08.143108 1542350 cri.go:89] found id: ""
	I1213 16:15:08.143131 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.143141 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:08.143150 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:08.143210 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:08.169485 1542350 cri.go:89] found id: ""
	I1213 16:15:08.169548 1542350 logs.go:282] 0 containers: []
	W1213 16:15:08.169571 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:08.169593 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:08.169632 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:08.186535 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:08.186608 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:08.254187 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:08.245289    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.245832    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.247630    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.248117    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.249849    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:08.245289    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.245832    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.247630    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.248117    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:08.249849    5938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:08.254252 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:08.254277 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:08.279498 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:08.279538 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:08.307012 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:08.307040 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:10.863431 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:10.875836 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:10.875902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:10.902828 1542350 cri.go:89] found id: ""
	I1213 16:15:10.902850 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.902859 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:10.902864 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:10.902924 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:10.927709 1542350 cri.go:89] found id: ""
	I1213 16:15:10.927732 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.927741 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:10.927747 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:10.927807 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:10.952424 1542350 cri.go:89] found id: ""
	I1213 16:15:10.952448 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.952457 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:10.952466 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:10.952544 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:10.977056 1542350 cri.go:89] found id: ""
	I1213 16:15:10.977087 1542350 logs.go:282] 0 containers: []
	W1213 16:15:10.977095 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:10.977101 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:10.977163 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:11.006742 1542350 cri.go:89] found id: ""
	I1213 16:15:11.006767 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.006776 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:11.006782 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:11.006857 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:11.033448 1542350 cri.go:89] found id: ""
	I1213 16:15:11.033471 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.033481 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:11.033491 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:11.033549 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:11.058288 1542350 cri.go:89] found id: ""
	I1213 16:15:11.058319 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.058329 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:11.058335 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:11.058403 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:11.086206 1542350 cri.go:89] found id: ""
	I1213 16:15:11.086229 1542350 logs.go:282] 0 containers: []
	W1213 16:15:11.086238 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:11.086248 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:11.086260 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:11.149204 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:11.149250 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:11.169208 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:11.169240 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:11.239824 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:11.230926    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.231789    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233414    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233896    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.235615    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:11.230926    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.231789    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233414    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.233896    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:11.235615    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:11.239888 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:11.239913 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:11.265156 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:11.265190 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:13.793650 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:13.804879 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:13.804957 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:13.830496 1542350 cri.go:89] found id: ""
	I1213 16:15:13.830524 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.830534 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:13.830541 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:13.830598 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:13.860289 1542350 cri.go:89] found id: ""
	I1213 16:15:13.860316 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.860325 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:13.860331 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:13.860404 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:13.889862 1542350 cri.go:89] found id: ""
	I1213 16:15:13.889900 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.889909 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:13.889915 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:13.889982 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:13.917096 1542350 cri.go:89] found id: ""
	I1213 16:15:13.917119 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.917127 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:13.917134 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:13.917192 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:13.941374 1542350 cri.go:89] found id: ""
	I1213 16:15:13.941397 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.941406 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:13.941412 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:13.941472 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:13.966429 1542350 cri.go:89] found id: ""
	I1213 16:15:13.966457 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.966467 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:13.966474 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:13.966536 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:13.992124 1542350 cri.go:89] found id: ""
	I1213 16:15:13.992193 1542350 logs.go:282] 0 containers: []
	W1213 16:15:13.992217 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:13.992231 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:13.992304 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:14.018581 1542350 cri.go:89] found id: ""
	I1213 16:15:14.018613 1542350 logs.go:282] 0 containers: []
	W1213 16:15:14.018621 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:14.018631 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:14.018643 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:14.076560 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:14.076594 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:14.093391 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:14.093470 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:14.169809 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:14.161020    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.162081    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.163786    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.164233    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.165949    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:14.161020    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.162081    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.163786    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.164233    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:14.165949    6167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:14.169831 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:14.169844 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:14.196553 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:14.196588 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:16.730383 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:16.741020 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:16.741091 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:16.765402 1542350 cri.go:89] found id: ""
	I1213 16:15:16.765425 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.765434 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:16.765440 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:16.765498 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:16.791004 1542350 cri.go:89] found id: ""
	I1213 16:15:16.791033 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.791042 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:16.791048 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:16.791112 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:16.816897 1542350 cri.go:89] found id: ""
	I1213 16:15:16.816925 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.816933 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:16.816939 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:16.817002 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:16.861774 1542350 cri.go:89] found id: ""
	I1213 16:15:16.861796 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.861803 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:16.861809 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:16.861868 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:16.895555 1542350 cri.go:89] found id: ""
	I1213 16:15:16.895575 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.895584 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:16.895589 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:16.895650 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:16.923607 1542350 cri.go:89] found id: ""
	I1213 16:15:16.923630 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.923638 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:16.923644 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:16.923705 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:16.952569 1542350 cri.go:89] found id: ""
	I1213 16:15:16.952602 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.952612 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:16.952618 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:16.952681 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:16.982597 1542350 cri.go:89] found id: ""
	I1213 16:15:16.982625 1542350 logs.go:282] 0 containers: []
	W1213 16:15:16.982634 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:16.982644 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:16.982657 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:17.040379 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:17.040417 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:17.056673 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:17.056703 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:17.155960 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:17.135813    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.147685    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.148473    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150347    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150872    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:17.135813    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.147685    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.148473    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150347    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:17.150872    6282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:17.155984 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:17.155997 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:17.181703 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:17.181742 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:19.710412 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:19.723576 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:19.723654 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:19.752079 1542350 cri.go:89] found id: ""
	I1213 16:15:19.752102 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.752111 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:19.752117 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:19.752198 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:19.776763 1542350 cri.go:89] found id: ""
	I1213 16:15:19.776829 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.776845 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:19.776853 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:19.776912 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:19.803069 1542350 cri.go:89] found id: ""
	I1213 16:15:19.803133 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.803149 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:19.803157 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:19.803216 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:19.828299 1542350 cri.go:89] found id: ""
	I1213 16:15:19.828332 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.828342 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:19.828348 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:19.828419 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:19.858915 1542350 cri.go:89] found id: ""
	I1213 16:15:19.858992 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.859013 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:19.859032 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:19.859127 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:19.889950 1542350 cri.go:89] found id: ""
	I1213 16:15:19.889987 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.889996 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:19.890003 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:19.890076 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:19.915855 1542350 cri.go:89] found id: ""
	I1213 16:15:19.915879 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.915893 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:19.915899 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:19.915958 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:19.945371 1542350 cri.go:89] found id: ""
	I1213 16:15:19.945409 1542350 logs.go:282] 0 containers: []
	W1213 16:15:19.945418 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:19.945460 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:19.945484 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:20.004545 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:20.004586 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:20.030075 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:20.030110 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:20.119134 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:20.105779    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.107335    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.108625    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.110278    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.111655    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:20.105779    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.107335    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.108625    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.110278    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:20.111655    6393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:20.119228 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:20.119426 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:20.157972 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:20.158017 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:22.690836 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:22.701577 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:22.701651 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:22.725883 1542350 cri.go:89] found id: ""
	I1213 16:15:22.725908 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.725917 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:22.725922 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:22.725980 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:22.750347 1542350 cri.go:89] found id: ""
	I1213 16:15:22.750373 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.750382 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:22.750388 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:22.750446 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:22.773604 1542350 cri.go:89] found id: ""
	I1213 16:15:22.773627 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.773636 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:22.773642 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:22.773699 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:22.798122 1542350 cri.go:89] found id: ""
	I1213 16:15:22.798144 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.798153 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:22.798159 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:22.798216 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:22.825364 1542350 cri.go:89] found id: ""
	I1213 16:15:22.825386 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.825394 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:22.825400 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:22.825463 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:22.860458 1542350 cri.go:89] found id: ""
	I1213 16:15:22.860480 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.860489 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:22.860503 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:22.860560 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:22.888782 1542350 cri.go:89] found id: ""
	I1213 16:15:22.888865 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.888889 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:22.888907 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:22.888991 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:22.917264 1542350 cri.go:89] found id: ""
	I1213 16:15:22.917288 1542350 logs.go:282] 0 containers: []
	W1213 16:15:22.917297 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:22.917306 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:22.917318 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:22.947808 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:22.947850 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:23.002868 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:23.002910 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:23.019957 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:23.019988 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:23.095906 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:23.076845    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.077548    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079269    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079937    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.083952    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:23.076845    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.077548    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079269    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.079937    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:23.083952    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:23.095985 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:23.096017 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:25.625418 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:25.636179 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:25.636256 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:25.660796 1542350 cri.go:89] found id: ""
	I1213 16:15:25.660819 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.660827 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:25.660833 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:25.660890 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:25.692137 1542350 cri.go:89] found id: ""
	I1213 16:15:25.692161 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.692169 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:25.692175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:25.692234 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:25.722645 1542350 cri.go:89] found id: ""
	I1213 16:15:25.722667 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.722677 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:25.722683 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:25.722741 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:25.746597 1542350 cri.go:89] found id: ""
	I1213 16:15:25.746619 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.746627 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:25.746633 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:25.746690 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:25.773364 1542350 cri.go:89] found id: ""
	I1213 16:15:25.773391 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.773399 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:25.773405 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:25.773464 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:25.798024 1542350 cri.go:89] found id: ""
	I1213 16:15:25.798047 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.798056 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:25.798062 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:25.798140 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:25.824949 1542350 cri.go:89] found id: ""
	I1213 16:15:25.824975 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.824984 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:25.824989 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:25.825065 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:25.851736 1542350 cri.go:89] found id: ""
	I1213 16:15:25.851809 1542350 logs.go:282] 0 containers: []
	W1213 16:15:25.851843 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:25.851869 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:25.851910 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:25.868875 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:25.868902 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:25.941457 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:25.933124    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.933785    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935483    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935919    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.937566    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:25.933124    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.933785    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935483    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.935919    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:25.937566    6620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:25.941527 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:25.941548 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:25.966625 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:25.966656 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:25.996976 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:25.997004 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:28.556122 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:28.567257 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:28.567352 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:28.592087 1542350 cri.go:89] found id: ""
	I1213 16:15:28.592153 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.592179 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:28.592196 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:28.592293 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:28.616658 1542350 cri.go:89] found id: ""
	I1213 16:15:28.616731 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.616746 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:28.616753 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:28.616822 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:28.640310 1542350 cri.go:89] found id: ""
	I1213 16:15:28.640335 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.640344 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:28.640349 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:28.640412 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:28.665406 1542350 cri.go:89] found id: ""
	I1213 16:15:28.665433 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.665443 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:28.665449 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:28.665508 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:28.690028 1542350 cri.go:89] found id: ""
	I1213 16:15:28.690090 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.690121 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:28.690143 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:28.690247 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:28.714656 1542350 cri.go:89] found id: ""
	I1213 16:15:28.714719 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.714753 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:28.714775 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:28.714862 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:28.741721 1542350 cri.go:89] found id: ""
	I1213 16:15:28.741745 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.741753 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:28.741759 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:28.741860 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:28.770039 1542350 cri.go:89] found id: ""
	I1213 16:15:28.770106 1542350 logs.go:282] 0 containers: []
	W1213 16:15:28.770132 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:28.770153 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:28.770191 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:28.794482 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:28.794514 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:28.825722 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:28.825751 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:28.885792 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:28.885826 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:28.902629 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:28.902658 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:28.968699 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:28.960883    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.961407    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.962907    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.963230    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.964863    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:28.960883    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.961407    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.962907    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.963230    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:28.964863    6751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:31.469803 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:31.480479 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:31.480600 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:31.512783 1542350 cri.go:89] found id: ""
	I1213 16:15:31.512807 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.512816 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:31.512823 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:31.512881 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:31.539773 1542350 cri.go:89] found id: ""
	I1213 16:15:31.539800 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.539815 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:31.539836 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:31.539915 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:31.564690 1542350 cri.go:89] found id: ""
	I1213 16:15:31.564715 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.564723 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:31.564729 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:31.564791 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:31.589449 1542350 cri.go:89] found id: ""
	I1213 16:15:31.589476 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.589484 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:31.589490 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:31.589550 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:31.614171 1542350 cri.go:89] found id: ""
	I1213 16:15:31.614203 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.614212 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:31.614218 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:31.614278 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:31.641466 1542350 cri.go:89] found id: ""
	I1213 16:15:31.641489 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.641498 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:31.641505 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:31.641563 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:31.665618 1542350 cri.go:89] found id: ""
	I1213 16:15:31.665641 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.665649 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:31.665656 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:31.665715 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:31.694436 1542350 cri.go:89] found id: ""
	I1213 16:15:31.694531 1542350 logs.go:282] 0 containers: []
	W1213 16:15:31.694554 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:31.694589 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:31.694621 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:31.720014 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:31.720047 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:31.746773 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:31.746844 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:31.802034 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:31.802070 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:31.819067 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:31.819096 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:31.926406 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:31.917414    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.918134    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.919118    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.920759    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.921377    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:31.917414    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.918134    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.919118    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.920759    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:31.921377    6862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:34.427501 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:34.438467 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:34.438539 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:34.469663 1542350 cri.go:89] found id: ""
	I1213 16:15:34.469685 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.469693 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:34.469699 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:34.469763 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:34.497352 1542350 cri.go:89] found id: ""
	I1213 16:15:34.497375 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.497384 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:34.497391 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:34.497449 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:34.522437 1542350 cri.go:89] found id: ""
	I1213 16:15:34.522462 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.522471 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:34.522477 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:34.522533 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:34.546310 1542350 cri.go:89] found id: ""
	I1213 16:15:34.546335 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.546344 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:34.546350 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:34.546410 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:34.570057 1542350 cri.go:89] found id: ""
	I1213 16:15:34.570082 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.570091 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:34.570097 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:34.570154 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:34.597335 1542350 cri.go:89] found id: ""
	I1213 16:15:34.597360 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.597369 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:34.597375 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:34.597438 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:34.622402 1542350 cri.go:89] found id: ""
	I1213 16:15:34.622426 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.622435 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:34.622441 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:34.622501 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:34.647379 1542350 cri.go:89] found id: ""
	I1213 16:15:34.647405 1542350 logs.go:282] 0 containers: []
	W1213 16:15:34.647414 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:34.647423 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:34.647435 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:34.707433 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:34.699526    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.700203    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701513    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701944    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.703518    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:34.699526    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.700203    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701513    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.701944    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:34.703518    6955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:34.707452 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:34.707464 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:34.732617 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:34.732650 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:34.760551 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:34.760579 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:34.817043 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:34.817078 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
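
Each gathering pass above runs the same fixed set of node-side commands: crictl lookups for every control-plane container, then journalctl for kubelet and containerd, a dmesg tail, and an overall container status listing. They can be replayed manually on the node to watch the state directly; the sketch below assumes shell access to the node and copies the commands verbatim from the Run: lines above.

    # look for control-plane containers the same way the log gatherer does
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd

    # unit logs that minikube collects when no containers are found
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400

    # kernel warnings and overall container status
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
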
	I1213 16:15:37.335446 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:37.346358 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:37.346480 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:37.375693 1542350 cri.go:89] found id: ""
	I1213 16:15:37.375763 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.375784 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:37.375803 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:37.375896 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:37.401729 1542350 cri.go:89] found id: ""
	I1213 16:15:37.401753 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.401761 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:37.401768 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:37.401832 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:37.426557 1542350 cri.go:89] found id: ""
	I1213 16:15:37.426583 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.426591 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:37.426597 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:37.426659 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:37.452633 1542350 cri.go:89] found id: ""
	I1213 16:15:37.452658 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.452666 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:37.452672 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:37.452731 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:37.476262 1542350 cri.go:89] found id: ""
	I1213 16:15:37.476287 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.476296 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:37.476302 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:37.476388 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:37.501165 1542350 cri.go:89] found id: ""
	I1213 16:15:37.501190 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.501198 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:37.501204 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:37.501285 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:37.524960 1542350 cri.go:89] found id: ""
	I1213 16:15:37.524983 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.524991 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:37.524997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:37.525055 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:37.550053 1542350 cri.go:89] found id: ""
	I1213 16:15:37.550079 1542350 logs.go:282] 0 containers: []
	W1213 16:15:37.550088 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:37.550097 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:37.550109 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:37.613799 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:37.604980    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.605842    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.607596    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.608285    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.609930    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:37.604980    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.605842    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.607596    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.608285    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:37.609930    7070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:37.613824 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:37.613837 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:37.638525 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:37.638559 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:37.665937 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:37.665965 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:37.722593 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:37.722628 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:40.238420 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:40.249230 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:40.249314 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:40.273014 1542350 cri.go:89] found id: ""
	I1213 16:15:40.273089 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.273133 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:40.273147 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:40.273227 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:40.298488 1542350 cri.go:89] found id: ""
	I1213 16:15:40.298553 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.298577 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:40.298595 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:40.298679 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:40.323131 1542350 cri.go:89] found id: ""
	I1213 16:15:40.323204 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.323228 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:40.323246 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:40.323368 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:40.360968 1542350 cri.go:89] found id: ""
	I1213 16:15:40.360996 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.361005 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:40.361011 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:40.361081 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:40.392530 1542350 cri.go:89] found id: ""
	I1213 16:15:40.392564 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.392573 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:40.392580 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:40.392648 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:40.428563 1542350 cri.go:89] found id: ""
	I1213 16:15:40.428588 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.428597 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:40.428603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:40.428686 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:40.453234 1542350 cri.go:89] found id: ""
	I1213 16:15:40.453259 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.453267 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:40.453274 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:40.453373 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:40.477074 1542350 cri.go:89] found id: ""
	I1213 16:15:40.477099 1542350 logs.go:282] 0 containers: []
	W1213 16:15:40.477108 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:40.477117 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:40.477144 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:40.503301 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:40.503521 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:40.537464 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:40.537493 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:40.593489 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:40.593526 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:40.609479 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:40.609507 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:40.674540 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:40.665852    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.666546    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.668621    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.669211    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.670710    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:40.665852    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.666546    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.668621    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.669211    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:40.670710    7198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:43.175524 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:43.186492 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:43.186570 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:43.210685 1542350 cri.go:89] found id: ""
	I1213 16:15:43.210712 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.210721 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:43.210728 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:43.210787 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:43.237076 1542350 cri.go:89] found id: ""
	I1213 16:15:43.237103 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.237112 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:43.237118 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:43.237177 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:43.264682 1542350 cri.go:89] found id: ""
	I1213 16:15:43.264756 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.264771 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:43.264778 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:43.264842 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:43.290869 1542350 cri.go:89] found id: ""
	I1213 16:15:43.290896 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.290905 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:43.290912 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:43.290976 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:43.316279 1542350 cri.go:89] found id: ""
	I1213 16:15:43.316306 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.316315 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:43.316322 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:43.316383 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:43.354838 1542350 cri.go:89] found id: ""
	I1213 16:15:43.354864 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.354873 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:43.354880 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:43.354957 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:43.391172 1542350 cri.go:89] found id: ""
	I1213 16:15:43.391198 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.391207 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:43.391213 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:43.391274 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:43.418613 1542350 cri.go:89] found id: ""
	I1213 16:15:43.418647 1542350 logs.go:282] 0 containers: []
	W1213 16:15:43.418657 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:43.418667 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:43.418680 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:43.435343 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:43.435384 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:43.503984 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:43.495327    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.495856    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.497696    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.498105    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.499619    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:43.495327    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.495856    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.497696    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.498105    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:43.499619    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:43.504005 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:43.504018 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:43.530844 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:43.530882 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:43.563046 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:43.563079 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:46.121764 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:46.133205 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:46.133278 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:46.159902 1542350 cri.go:89] found id: ""
	I1213 16:15:46.159926 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.159935 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:46.159941 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:46.160016 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:46.189203 1542350 cri.go:89] found id: ""
	I1213 16:15:46.189236 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.189260 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:46.189267 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:46.189336 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:46.214186 1542350 cri.go:89] found id: ""
	I1213 16:15:46.214208 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.214216 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:46.214222 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:46.214281 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:46.244894 1542350 cri.go:89] found id: ""
	I1213 16:15:46.244923 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.244943 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:46.244949 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:46.245015 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:46.270668 1542350 cri.go:89] found id: ""
	I1213 16:15:46.270693 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.270702 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:46.270708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:46.270771 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:46.296520 1542350 cri.go:89] found id: ""
	I1213 16:15:46.296565 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.296595 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:46.296603 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:46.296684 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:46.322387 1542350 cri.go:89] found id: ""
	I1213 16:15:46.322410 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.322418 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:46.322424 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:46.322492 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:46.359071 1542350 cri.go:89] found id: ""
	I1213 16:15:46.359093 1542350 logs.go:282] 0 containers: []
	W1213 16:15:46.359102 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:46.359111 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:46.359121 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:46.397696 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:46.397772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:46.453341 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:46.453386 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:46.469917 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:46.469945 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:46.531639 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:46.523791    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.524434    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526137    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526426    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.527840    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:46.523791    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.524434    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526137    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.526426    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:46.527840    7420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:46.531665 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:46.531678 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:49.058136 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:49.069039 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:49.069109 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:49.103600 1542350 cri.go:89] found id: ""
	I1213 16:15:49.103622 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.103630 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:49.103637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:49.103694 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:49.133756 1542350 cri.go:89] found id: ""
	I1213 16:15:49.133778 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.133787 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:49.133793 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:49.133850 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:49.159824 1542350 cri.go:89] found id: ""
	I1213 16:15:49.159847 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.159856 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:49.159862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:49.159919 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:49.188461 1542350 cri.go:89] found id: ""
	I1213 16:15:49.188527 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.188567 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:49.188598 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:49.188677 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:49.212316 1542350 cri.go:89] found id: ""
	I1213 16:15:49.212338 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.212346 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:49.212352 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:49.212424 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:49.236324 1542350 cri.go:89] found id: ""
	I1213 16:15:49.236348 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.236356 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:49.236362 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:49.236423 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:49.262438 1542350 cri.go:89] found id: ""
	I1213 16:15:49.262475 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.262484 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:49.262491 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:49.262578 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:49.292613 1542350 cri.go:89] found id: ""
	I1213 16:15:49.292637 1542350 logs.go:282] 0 containers: []
	W1213 16:15:49.292646 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:49.292655 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:49.292667 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:49.350224 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:49.350260 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:49.367633 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:49.367661 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:49.436081 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:49.427382    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.428117    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.429891    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.430411    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.431982    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:49.427382    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.428117    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.429891    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.430411    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:49.431982    7520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:49.436102 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:49.436115 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:49.461438 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:49.461474 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:51.994161 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:52.005864 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:52.005962 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:52.032002 1542350 cri.go:89] found id: ""
	I1213 16:15:52.032027 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.032052 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:52.032059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:52.032118 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:52.058529 1542350 cri.go:89] found id: ""
	I1213 16:15:52.058552 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.058561 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:52.058567 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:52.058627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:52.085765 1542350 cri.go:89] found id: ""
	I1213 16:15:52.085787 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.085795 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:52.085802 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:52.085860 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:52.113317 1542350 cri.go:89] found id: ""
	I1213 16:15:52.113389 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.113411 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:52.113430 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:52.113512 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:52.144343 1542350 cri.go:89] found id: ""
	I1213 16:15:52.144364 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.144373 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:52.144379 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:52.144450 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:52.170804 1542350 cri.go:89] found id: ""
	I1213 16:15:52.170876 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.170899 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:52.170916 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:52.171015 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:52.200043 1542350 cri.go:89] found id: ""
	I1213 16:15:52.200114 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.200137 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:52.200155 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:52.200254 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:52.226948 1542350 cri.go:89] found id: ""
	I1213 16:15:52.227022 1542350 logs.go:282] 0 containers: []
	W1213 16:15:52.227057 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:52.227086 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:52.227120 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:52.282092 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:52.282131 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:52.298201 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:52.298227 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:52.381110 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:52.372691    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.373232    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.374717    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.375078    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.376590    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:52.372691    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.373232    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.374717    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.375078    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:52.376590    7628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:52.381134 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:52.381148 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:52.409962 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:52.409994 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:54.942176 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:54.952757 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:54.952836 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:54.977644 1542350 cri.go:89] found id: ""
	I1213 16:15:54.977669 1542350 logs.go:282] 0 containers: []
	W1213 16:15:54.977678 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:54.977684 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:54.977742 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:55.005694 1542350 cri.go:89] found id: ""
	I1213 16:15:55.005722 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.005732 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:55.005740 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:55.005814 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:55.038377 1542350 cri.go:89] found id: ""
	I1213 16:15:55.038411 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.038422 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:55.038428 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:55.038493 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:55.065383 1542350 cri.go:89] found id: ""
	I1213 16:15:55.065417 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.065426 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:55.065433 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:55.065493 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:55.099813 1542350 cri.go:89] found id: ""
	I1213 16:15:55.099841 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.099850 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:55.099856 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:55.099931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:55.128346 1542350 cri.go:89] found id: ""
	I1213 16:15:55.128368 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.128380 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:55.128387 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:55.128456 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:55.160925 1542350 cri.go:89] found id: ""
	I1213 16:15:55.160957 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.160966 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:55.160973 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:55.161037 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:55.188105 1542350 cri.go:89] found id: ""
	I1213 16:15:55.188132 1542350 logs.go:282] 0 containers: []
	W1213 16:15:55.188141 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:55.188151 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:55.188164 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:15:55.218869 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:55.218893 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:55.274258 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:55.274294 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:55.290251 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:55.290280 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:55.359521 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:55.350886    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.351709    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353380    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353940    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.355564    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:55.350886    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.351709    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353380    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.353940    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:55.355564    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:55.359543 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:55.359556 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:57.887804 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:15:57.898226 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:15:57.898297 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:15:57.922697 1542350 cri.go:89] found id: ""
	I1213 16:15:57.922723 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.922732 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:15:57.922740 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:15:57.922821 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:15:57.947431 1542350 cri.go:89] found id: ""
	I1213 16:15:57.947457 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.947467 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:15:57.947473 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:15:57.947532 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:15:57.971494 1542350 cri.go:89] found id: ""
	I1213 16:15:57.971557 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.971582 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:15:57.971601 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:15:57.971679 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:15:57.999470 1542350 cri.go:89] found id: ""
	I1213 16:15:57.999495 1542350 logs.go:282] 0 containers: []
	W1213 16:15:57.999504 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:15:57.999510 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:15:57.999572 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:15:58.028740 1542350 cri.go:89] found id: ""
	I1213 16:15:58.028767 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.028777 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:15:58.028783 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:15:58.028849 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:15:58.054022 1542350 cri.go:89] found id: ""
	I1213 16:15:58.054043 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.054053 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:15:58.054059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:15:58.054121 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:15:58.096720 1542350 cri.go:89] found id: ""
	I1213 16:15:58.096749 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.096758 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:15:58.096765 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:15:58.096825 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:15:58.133084 1542350 cri.go:89] found id: ""
	I1213 16:15:58.133114 1542350 logs.go:282] 0 containers: []
	W1213 16:15:58.133123 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:15:58.133133 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:15:58.133144 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:15:58.198401 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:15:58.198437 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:15:58.216601 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:15:58.216683 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:15:58.288456 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:15:58.279720    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.280528    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282071    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282726    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.283743    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:15:58.279720    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.280528    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282071    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.282726    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:15:58.283743    7852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:15:58.288523 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:15:58.288544 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:15:58.314432 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:15:58.314470 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:00.851874 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:00.862470 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:00.862540 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:00.886360 1542350 cri.go:89] found id: ""
	I1213 16:16:00.886384 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.886392 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:00.886398 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:00.886458 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:00.910826 1542350 cri.go:89] found id: ""
	I1213 16:16:00.910851 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.910861 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:00.910867 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:00.910925 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:00.935111 1542350 cri.go:89] found id: ""
	I1213 16:16:00.935141 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.935150 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:00.935156 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:00.935214 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:00.960959 1542350 cri.go:89] found id: ""
	I1213 16:16:00.960982 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.960991 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:00.960997 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:00.961057 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:00.985954 1542350 cri.go:89] found id: ""
	I1213 16:16:00.985977 1542350 logs.go:282] 0 containers: []
	W1213 16:16:00.985986 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:00.985991 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:00.986052 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:01.011865 1542350 cri.go:89] found id: ""
	I1213 16:16:01.011889 1542350 logs.go:282] 0 containers: []
	W1213 16:16:01.011897 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:01.011903 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:01.011966 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:01.041391 1542350 cri.go:89] found id: ""
	I1213 16:16:01.041412 1542350 logs.go:282] 0 containers: []
	W1213 16:16:01.041421 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:01.041427 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:01.041486 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:01.065980 1542350 cri.go:89] found id: ""
	I1213 16:16:01.066001 1542350 logs.go:282] 0 containers: []
	W1213 16:16:01.066010 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:01.066020 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:01.066035 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:01.125520 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:01.125602 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:01.143155 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:01.143228 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:01.224569 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:01.213708    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.214378    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.216451    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.217392    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.219480    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:01.213708    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.214378    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.216451    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.217392    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:01.219480    7962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:01.224588 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:01.224602 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:01.251006 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:01.251045 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:03.780250 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:03.794327 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:03.794399 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:03.819181 1542350 cri.go:89] found id: ""
	I1213 16:16:03.819209 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.819218 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:03.819224 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:03.819285 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:03.845225 1542350 cri.go:89] found id: ""
	I1213 16:16:03.845248 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.845257 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:03.845264 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:03.845324 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:03.873944 1542350 cri.go:89] found id: ""
	I1213 16:16:03.873966 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.873975 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:03.873981 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:03.874042 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:03.899655 1542350 cri.go:89] found id: ""
	I1213 16:16:03.899685 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.899694 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:03.899701 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:03.899763 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:03.927094 1542350 cri.go:89] found id: ""
	I1213 16:16:03.927122 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.927131 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:03.927137 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:03.927196 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:03.952240 1542350 cri.go:89] found id: ""
	I1213 16:16:03.952267 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.952276 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:03.952282 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:03.952340 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:03.976494 1542350 cri.go:89] found id: ""
	I1213 16:16:03.976520 1542350 logs.go:282] 0 containers: []
	W1213 16:16:03.976529 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:03.976535 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:03.976605 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:04.001277 1542350 cri.go:89] found id: ""
	I1213 16:16:04.001304 1542350 logs.go:282] 0 containers: []
	W1213 16:16:04.001313 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:04.001324 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:04.001339 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:04.061393 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:04.061428 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:04.078258 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:04.078290 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:04.162687 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:04.153708    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.154478    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156333    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156798    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.158424    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:04.153708    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.154478    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156333    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.156798    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:04.158424    8069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:04.162710 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:04.162723 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:04.187844 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:04.187879 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:06.716865 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:06.727125 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:06.727193 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:06.752991 1542350 cri.go:89] found id: ""
	I1213 16:16:06.753015 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.753024 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:06.753030 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:06.753089 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:06.777092 1542350 cri.go:89] found id: ""
	I1213 16:16:06.777116 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.777125 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:06.777130 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:06.777188 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:06.805182 1542350 cri.go:89] found id: ""
	I1213 16:16:06.805256 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.805278 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:06.805292 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:06.805363 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:06.833454 1542350 cri.go:89] found id: ""
	I1213 16:16:06.833477 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.833486 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:06.833492 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:06.833553 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:06.864279 1542350 cri.go:89] found id: ""
	I1213 16:16:06.864303 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.864311 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:06.864318 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:06.864379 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:06.889879 1542350 cri.go:89] found id: ""
	I1213 16:16:06.889905 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.889914 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:06.889920 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:06.889980 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:06.913566 1542350 cri.go:89] found id: ""
	I1213 16:16:06.913600 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.913609 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:06.913615 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:06.913682 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:06.939090 1542350 cri.go:89] found id: ""
	I1213 16:16:06.939161 1542350 logs.go:282] 0 containers: []
	W1213 16:16:06.939199 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:06.939226 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:06.939253 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:06.994546 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:06.994587 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:07.012062 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:07.012099 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:07.079574 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:07.070333    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.070833    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.072777    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.073334    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.074988    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:07.070333    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.070833    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.072777    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.073334    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:07.074988    8178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:07.079597 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:07.079609 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:07.106688 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:07.106772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:09.648446 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:09.659497 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:09.659572 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:09.685004 1542350 cri.go:89] found id: ""
	I1213 16:16:09.685031 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.685040 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:09.685047 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:09.685106 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:09.710322 1542350 cri.go:89] found id: ""
	I1213 16:16:09.710350 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.710359 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:09.710365 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:09.710424 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:09.736183 1542350 cri.go:89] found id: ""
	I1213 16:16:09.736209 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.736218 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:09.736225 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:09.736328 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:09.761808 1542350 cri.go:89] found id: ""
	I1213 16:16:09.761831 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.761839 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:09.761846 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:09.761907 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:09.788666 1542350 cri.go:89] found id: ""
	I1213 16:16:09.788690 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.788699 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:09.788705 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:09.788767 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:09.815565 1542350 cri.go:89] found id: ""
	I1213 16:16:09.815590 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.815598 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:09.815604 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:09.815663 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:09.841443 1542350 cri.go:89] found id: ""
	I1213 16:16:09.841466 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.841475 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:09.841481 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:09.841538 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:09.870775 1542350 cri.go:89] found id: ""
	I1213 16:16:09.870798 1542350 logs.go:282] 0 containers: []
	W1213 16:16:09.870806 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:09.870818 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:09.870829 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:09.927243 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:09.927279 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:09.944116 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:09.944150 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:10.018299 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:10.008262    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.009034    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011237    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011729    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.013703    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:10.008262    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.009034    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011237    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.011729    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:10.013703    8292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:10.018334 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:10.018348 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:10.062337 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:10.062384 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:12.610748 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:12.622191 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:12.622266 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:12.654912 1542350 cri.go:89] found id: ""
	I1213 16:16:12.654939 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.654948 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:12.654955 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:12.655017 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:12.679878 1542350 cri.go:89] found id: ""
	I1213 16:16:12.679904 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.679913 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:12.679919 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:12.679981 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:12.708594 1542350 cri.go:89] found id: ""
	I1213 16:16:12.708619 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.708628 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:12.708641 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:12.708703 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:12.734832 1542350 cri.go:89] found id: ""
	I1213 16:16:12.734857 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.734866 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:12.734872 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:12.734931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:12.760756 1542350 cri.go:89] found id: ""
	I1213 16:16:12.760784 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.760793 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:12.760799 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:12.760860 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:12.786434 1542350 cri.go:89] found id: ""
	I1213 16:16:12.786470 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.786479 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:12.786486 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:12.786558 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:12.810666 1542350 cri.go:89] found id: ""
	I1213 16:16:12.810699 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.810708 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:12.810714 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:12.810779 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:12.835161 1542350 cri.go:89] found id: ""
	I1213 16:16:12.835206 1542350 logs.go:282] 0 containers: []
	W1213 16:16:12.835216 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:12.835225 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:12.835238 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:12.851412 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:12.851438 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:12.919002 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:12.910126    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.910878    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.912652    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.913309    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.915168    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:12.910126    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.910878    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.912652    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.913309    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:12.915168    8405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:12.919032 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:12.919045 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:12.945016 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:12.945054 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:12.975303 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:12.975353 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:15.533437 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:15.545434 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:15.545514 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:15.570277 1542350 cri.go:89] found id: ""
	I1213 16:16:15.570303 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.570353 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:15.570362 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:15.570427 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:15.602983 1542350 cri.go:89] found id: ""
	I1213 16:16:15.603009 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.603017 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:15.603023 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:15.603082 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:15.631137 1542350 cri.go:89] found id: ""
	I1213 16:16:15.631172 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.631181 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:15.631187 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:15.631245 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:15.664783 1542350 cri.go:89] found id: ""
	I1213 16:16:15.664810 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.664819 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:15.664825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:15.664886 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:15.691237 1542350 cri.go:89] found id: ""
	I1213 16:16:15.691264 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.691274 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:15.691280 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:15.691368 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:15.715449 1542350 cri.go:89] found id: ""
	I1213 16:16:15.715473 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.715482 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:15.715489 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:15.715553 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:15.740667 1542350 cri.go:89] found id: ""
	I1213 16:16:15.740692 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.740701 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:15.740707 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:15.740770 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:15.765160 1542350 cri.go:89] found id: ""
	I1213 16:16:15.765182 1542350 logs.go:282] 0 containers: []
	W1213 16:16:15.765191 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:15.765200 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:15.765212 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:15.820427 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:15.820466 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:15.836513 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:15.836541 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:15.903389 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:15.894908    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.895741    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897308    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897848    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.899415    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:15.894908    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.895741    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897308    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.897848    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:15.899415    8520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:15.903412 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:15.903427 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:15.928787 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:15.928825 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:18.458780 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:18.469268 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:18.469341 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:18.497781 1542350 cri.go:89] found id: ""
	I1213 16:16:18.497811 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.497824 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:18.497831 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:18.497918 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:18.522772 1542350 cri.go:89] found id: ""
	I1213 16:16:18.522799 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.522808 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:18.522815 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:18.522874 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:18.549419 1542350 cri.go:89] found id: ""
	I1213 16:16:18.549443 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.549452 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:18.549457 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:18.549524 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:18.573853 1542350 cri.go:89] found id: ""
	I1213 16:16:18.573881 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.573889 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:18.573896 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:18.573960 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:18.604140 1542350 cri.go:89] found id: ""
	I1213 16:16:18.604167 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.604188 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:18.604194 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:18.604264 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:18.637649 1542350 cri.go:89] found id: ""
	I1213 16:16:18.637677 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.637686 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:18.637692 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:18.637752 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:18.668019 1542350 cri.go:89] found id: ""
	I1213 16:16:18.668045 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.668053 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:18.668059 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:18.668120 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:18.694456 1542350 cri.go:89] found id: ""
	I1213 16:16:18.694482 1542350 logs.go:282] 0 containers: []
	W1213 16:16:18.694493 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:18.694503 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:18.694515 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:18.722967 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:18.722995 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:18.780808 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:18.780844 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:18.797393 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:18.797421 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:18.866061 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:18.858113    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.858623    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860137    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860610    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.862072    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:18.858113    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.858623    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860137    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.860610    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:18.862072    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:18.866083 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:18.866096 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:21.391436 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:21.403266 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:21.403363 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:21.429372 1542350 cri.go:89] found id: ""
	I1213 16:16:21.429405 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.429415 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:21.429420 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:21.429479 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:21.454218 1542350 cri.go:89] found id: ""
	I1213 16:16:21.454287 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.454311 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:21.454329 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:21.454420 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:21.478016 1542350 cri.go:89] found id: ""
	I1213 16:16:21.478041 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.478049 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:21.478055 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:21.478112 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:21.504574 1542350 cri.go:89] found id: ""
	I1213 16:16:21.504612 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.504622 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:21.504629 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:21.504692 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:21.531727 1542350 cri.go:89] found id: ""
	I1213 16:16:21.531761 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.531770 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:21.531777 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:21.531836 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:21.556964 1542350 cri.go:89] found id: ""
	I1213 16:16:21.556999 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.557010 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:21.557018 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:21.557077 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:21.592445 1542350 cri.go:89] found id: ""
	I1213 16:16:21.592509 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.592533 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:21.592550 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:21.592645 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:21.620898 1542350 cri.go:89] found id: ""
	I1213 16:16:21.620920 1542350 logs.go:282] 0 containers: []
	W1213 16:16:21.620928 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:21.620937 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:21.620949 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:21.682810 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:21.682846 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:21.699275 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:21.699375 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:21.766336 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:21.758580    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.759107    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.760601    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.761028    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.762478    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:21.758580    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.759107    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.760601    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.761028    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:21.762478    8740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:21.766397 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:21.766426 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:21.791266 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:21.791300 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:24.319481 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:24.330216 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:24.330310 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:24.369003 1542350 cri.go:89] found id: ""
	I1213 16:16:24.369033 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.369041 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:24.369047 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:24.369106 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:24.396473 1542350 cri.go:89] found id: ""
	I1213 16:16:24.396502 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.396511 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:24.396516 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:24.396580 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:24.436915 1542350 cri.go:89] found id: ""
	I1213 16:16:24.436939 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.436948 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:24.436953 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:24.437013 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:24.465118 1542350 cri.go:89] found id: ""
	I1213 16:16:24.465139 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.465147 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:24.465153 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:24.465211 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:24.490097 1542350 cri.go:89] found id: ""
	I1213 16:16:24.490121 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.490130 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:24.490136 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:24.490196 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:24.520031 1542350 cri.go:89] found id: ""
	I1213 16:16:24.520096 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.520120 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:24.520141 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:24.520214 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:24.545891 1542350 cri.go:89] found id: ""
	I1213 16:16:24.545919 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.545928 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:24.545933 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:24.546014 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:24.574276 1542350 cri.go:89] found id: ""
	I1213 16:16:24.574313 1542350 logs.go:282] 0 containers: []
	W1213 16:16:24.574323 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:24.574353 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:24.574387 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:24.611068 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:24.611145 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:24.677764 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:24.677808 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:24.696759 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:24.696802 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:24.773564 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:24.765018    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.765700    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767375    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767981    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.769749    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:24.765018    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.765700    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767375    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.767981    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:24.769749    8864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:24.773586 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:24.773598 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:27.299826 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:27.310825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:27.310902 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:27.341771 1542350 cri.go:89] found id: ""
	I1213 16:16:27.341794 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.341803 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:27.341810 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:27.341876 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:27.369884 1542350 cri.go:89] found id: ""
	I1213 16:16:27.369908 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.369917 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:27.369923 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:27.369988 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:27.402575 1542350 cri.go:89] found id: ""
	I1213 16:16:27.402598 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.402606 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:27.402612 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:27.402680 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:27.429116 1542350 cri.go:89] found id: ""
	I1213 16:16:27.429157 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.429169 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:27.429176 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:27.429245 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:27.456147 1542350 cri.go:89] found id: ""
	I1213 16:16:27.456174 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.456183 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:27.456191 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:27.456254 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:27.481262 1542350 cri.go:89] found id: ""
	I1213 16:16:27.481288 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.481297 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:27.481304 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:27.481370 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:27.507140 1542350 cri.go:89] found id: ""
	I1213 16:16:27.507169 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.507179 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:27.507185 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:27.507269 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:27.532060 1542350 cri.go:89] found id: ""
	I1213 16:16:27.532139 1542350 logs.go:282] 0 containers: []
	W1213 16:16:27.532162 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:27.532180 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:27.532193 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:27.588083 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:27.588123 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:27.605875 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:27.605906 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:27.677799 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:27.667405    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.668242    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.669729    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.670202    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.673720    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:27.667405    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.668242    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.669729    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.670202    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:27.673720    8964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:27.677822 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:27.677834 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:27.703668 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:27.703704 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:30.232616 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:30.244334 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:30.244408 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:30.269730 1542350 cri.go:89] found id: ""
	I1213 16:16:30.269757 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.269765 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:30.269771 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:30.269830 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:30.296665 1542350 cri.go:89] found id: ""
	I1213 16:16:30.296693 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.296702 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:30.296709 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:30.296832 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:30.322172 1542350 cri.go:89] found id: ""
	I1213 16:16:30.322251 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.322276 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:30.322296 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:30.322405 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:30.364083 1542350 cri.go:89] found id: ""
	I1213 16:16:30.364113 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.364125 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:30.364138 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:30.364206 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:30.405727 1542350 cri.go:89] found id: ""
	I1213 16:16:30.405751 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.405759 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:30.405765 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:30.405825 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:30.432819 1542350 cri.go:89] found id: ""
	I1213 16:16:30.432846 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.432855 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:30.432862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:30.432921 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:30.458202 1542350 cri.go:89] found id: ""
	I1213 16:16:30.458228 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.458237 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:30.458243 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:30.458310 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:30.482950 1542350 cri.go:89] found id: ""
	I1213 16:16:30.482977 1542350 logs.go:282] 0 containers: []
	W1213 16:16:30.482987 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:30.482996 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:30.483008 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:30.507886 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:30.507921 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:30.538090 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:30.538159 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:30.593644 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:30.593729 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:30.610246 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:30.610272 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:30.684359 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:30.676539    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.677025    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678556    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678874    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.680436    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:30.676539    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.677025    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678556    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.678874    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:30.680436    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:33.184602 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:33.195455 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:33.195556 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:33.225437 1542350 cri.go:89] found id: ""
	I1213 16:16:33.225459 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.225468 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:33.225474 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:33.225541 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:33.250024 1542350 cri.go:89] found id: ""
	I1213 16:16:33.250089 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.250113 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:33.250131 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:33.250218 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:33.275721 1542350 cri.go:89] found id: ""
	I1213 16:16:33.275747 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.275755 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:33.275762 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:33.275823 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:33.300346 1542350 cri.go:89] found id: ""
	I1213 16:16:33.300368 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.300377 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:33.300383 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:33.300442 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:33.324866 1542350 cri.go:89] found id: ""
	I1213 16:16:33.324889 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.324897 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:33.324904 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:33.324963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:33.354142 1542350 cri.go:89] found id: ""
	I1213 16:16:33.354216 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.354239 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:33.354257 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:33.354347 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:33.388195 1542350 cri.go:89] found id: ""
	I1213 16:16:33.388216 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.388224 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:33.388230 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:33.388286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:33.416283 1542350 cri.go:89] found id: ""
	I1213 16:16:33.416306 1542350 logs.go:282] 0 containers: []
	W1213 16:16:33.416314 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:33.416325 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:33.416337 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:33.432175 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:33.432206 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:33.499040 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:33.490874    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.491494    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493033    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493510    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.495051    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:33.490874    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.491494    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493033    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.493510    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:33.495051    9184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:33.499062 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:33.499074 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:33.524925 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:33.524958 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:33.554998 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:33.555026 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:36.110953 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:36.121861 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:36.121930 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:36.146369 1542350 cri.go:89] found id: ""
	I1213 16:16:36.146429 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.146450 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:36.146476 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:36.146557 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:36.171595 1542350 cri.go:89] found id: ""
	I1213 16:16:36.171617 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.171625 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:36.171631 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:36.171693 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:36.196869 1542350 cri.go:89] found id: ""
	I1213 16:16:36.196891 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.196900 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:36.196906 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:36.196963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:36.221290 1542350 cri.go:89] found id: ""
	I1213 16:16:36.221317 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.221326 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:36.221338 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:36.221400 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:36.246254 1542350 cri.go:89] found id: ""
	I1213 16:16:36.246280 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.246289 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:36.246294 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:36.246352 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:36.276463 1542350 cri.go:89] found id: ""
	I1213 16:16:36.276486 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.276494 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:36.276500 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:36.276565 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:36.302414 1542350 cri.go:89] found id: ""
	I1213 16:16:36.302446 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.302454 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:36.302460 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:36.302530 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:36.327676 1542350 cri.go:89] found id: ""
	I1213 16:16:36.327753 1542350 logs.go:282] 0 containers: []
	W1213 16:16:36.327770 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:36.327781 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:36.327793 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:36.347589 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:36.347658 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:36.422910 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:36.414535    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.415436    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417255    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417652    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.419100    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:36.414535    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.415436    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417255    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.417652    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:36.419100    9295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:36.422940 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:36.422968 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:36.449077 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:36.449114 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:36.476904 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:36.476935 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:39.032927 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:39.043398 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:39.043466 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:39.068941 1542350 cri.go:89] found id: ""
	I1213 16:16:39.068968 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.068977 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:39.068983 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:39.069040 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:39.094525 1542350 cri.go:89] found id: ""
	I1213 16:16:39.094548 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.094557 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:39.094564 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:39.094626 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:39.118854 1542350 cri.go:89] found id: ""
	I1213 16:16:39.118875 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.118884 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:39.118890 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:39.118946 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:39.147615 1542350 cri.go:89] found id: ""
	I1213 16:16:39.147642 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.147651 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:39.147657 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:39.147719 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:39.173015 1542350 cri.go:89] found id: ""
	I1213 16:16:39.173038 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.173047 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:39.173053 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:39.173121 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:39.198427 1542350 cri.go:89] found id: ""
	I1213 16:16:39.198453 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.198462 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:39.198468 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:39.198525 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:39.223491 1542350 cri.go:89] found id: ""
	I1213 16:16:39.223514 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.223522 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:39.223528 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:39.223587 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:39.254117 1542350 cri.go:89] found id: ""
	I1213 16:16:39.254148 1542350 logs.go:282] 0 containers: []
	W1213 16:16:39.254157 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:39.254166 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:39.254178 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:39.313667 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:39.313706 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:39.331137 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:39.331215 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:39.414971 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:39.406302    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.407133    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.408880    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.409415    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.411052    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:39.406302    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.407133    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.408880    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.409415    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:39.411052    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:39.414990 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:39.415003 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:39.440561 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:39.440604 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:41.973087 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:41.983385 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:41.983456 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:42.010547 1542350 cri.go:89] found id: ""
	I1213 16:16:42.010644 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.010658 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:42.010666 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:42.010780 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:42.041355 1542350 cri.go:89] found id: ""
	I1213 16:16:42.041379 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.041388 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:42.041394 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:42.041462 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:42.074781 1542350 cri.go:89] found id: ""
	I1213 16:16:42.074808 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.074818 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:42.074825 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:42.074895 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:42.105943 1542350 cri.go:89] found id: ""
	I1213 16:16:42.105972 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.105980 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:42.105987 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:42.106062 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:42.144036 1542350 cri.go:89] found id: ""
	I1213 16:16:42.144062 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.144070 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:42.144077 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:42.144144 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:42.177438 1542350 cri.go:89] found id: ""
	I1213 16:16:42.177464 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.177474 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:42.177482 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:42.177555 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:42.209616 1542350 cri.go:89] found id: ""
	I1213 16:16:42.209644 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.209653 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:42.209662 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:42.209730 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:42.240251 1542350 cri.go:89] found id: ""
	I1213 16:16:42.240283 1542350 logs.go:282] 0 containers: []
	W1213 16:16:42.240293 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:42.240303 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:42.240317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:42.274974 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:42.275008 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:42.333409 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:42.333488 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:42.353909 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:42.353998 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:42.431547 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:42.422852    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.423662    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425409    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425733    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.427251    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:42.422852    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.423662    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425409    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.425733    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:42.427251    9541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:42.431570 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:42.431582 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:44.957982 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:44.968708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:44.968778 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:44.998179 1542350 cri.go:89] found id: ""
	I1213 16:16:44.998205 1542350 logs.go:282] 0 containers: []
	W1213 16:16:44.998214 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:44.998220 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:44.998281 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:45.055672 1542350 cri.go:89] found id: ""
	I1213 16:16:45.055695 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.055705 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:45.055712 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:45.055785 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:45.112504 1542350 cri.go:89] found id: ""
	I1213 16:16:45.112598 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.112625 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:45.112646 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:45.112821 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:45.148966 1542350 cri.go:89] found id: ""
	I1213 16:16:45.148993 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.149002 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:45.149008 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:45.149081 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:45.215276 1542350 cri.go:89] found id: ""
	I1213 16:16:45.215383 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.215547 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:45.215573 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:45.215685 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:45.266343 1542350 cri.go:89] found id: ""
	I1213 16:16:45.266422 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.266448 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:45.266469 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:45.266569 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:45.311801 1542350 cri.go:89] found id: ""
	I1213 16:16:45.311877 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.311905 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:45.311925 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:45.312039 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:45.345856 1542350 cri.go:89] found id: ""
	I1213 16:16:45.345884 1542350 logs.go:282] 0 containers: []
	W1213 16:16:45.345894 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:45.345904 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:45.345928 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:45.416309 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:45.416392 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:45.433509 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:45.433593 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:45.504820 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:45.495815    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.496673    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498435    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498745    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.500220    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:45.495815    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.496673    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498435    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.498745    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:45.500220    9643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:45.504841 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:45.504855 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:45.530797 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:45.530836 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:48.061294 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:48.072582 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:48.072653 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:48.101139 1542350 cri.go:89] found id: ""
	I1213 16:16:48.101164 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.101173 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:48.101179 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:48.101250 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:48.127077 1542350 cri.go:89] found id: ""
	I1213 16:16:48.127100 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.127109 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:48.127115 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:48.127179 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:48.152708 1542350 cri.go:89] found id: ""
	I1213 16:16:48.152731 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.152740 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:48.152746 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:48.152806 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:48.183194 1542350 cri.go:89] found id: ""
	I1213 16:16:48.183220 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.183228 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:48.183235 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:48.183295 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:48.208544 1542350 cri.go:89] found id: ""
	I1213 16:16:48.208612 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.208638 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:48.208658 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:48.208773 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:48.234599 1542350 cri.go:89] found id: ""
	I1213 16:16:48.234633 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.234642 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:48.234667 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:48.234745 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:48.259586 1542350 cri.go:89] found id: ""
	I1213 16:16:48.259614 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.259623 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:48.259629 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:48.259712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:48.283477 1542350 cri.go:89] found id: ""
	I1213 16:16:48.283499 1542350 logs.go:282] 0 containers: []
	W1213 16:16:48.283509 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:48.283542 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:48.283561 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:48.339116 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:48.339190 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:48.360686 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:48.360767 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:48.433619 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:48.425212    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.425811    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427513    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427900    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.429452    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:48.425212    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.425811    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427513    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.427900    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:48.429452    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:48.433643 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:48.433655 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:48.458793 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:48.458837 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:50.988521 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:50.999862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:50.999930 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:51.029019 1542350 cri.go:89] found id: ""
	I1213 16:16:51.029045 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.029054 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:51.029060 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:51.029132 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:51.058195 1542350 cri.go:89] found id: ""
	I1213 16:16:51.058222 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.058231 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:51.058237 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:51.058297 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:51.083486 1542350 cri.go:89] found id: ""
	I1213 16:16:51.083512 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.083521 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:51.083527 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:51.083589 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:51.108698 1542350 cri.go:89] found id: ""
	I1213 16:16:51.108723 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.108733 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:51.108739 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:51.108801 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:51.133979 1542350 cri.go:89] found id: ""
	I1213 16:16:51.134003 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.134011 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:51.134017 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:51.134074 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:51.161527 1542350 cri.go:89] found id: ""
	I1213 16:16:51.161552 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.161562 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:51.161568 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:51.161627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:51.186814 1542350 cri.go:89] found id: ""
	I1213 16:16:51.186841 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.186850 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:51.186856 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:51.186916 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:51.216180 1542350 cri.go:89] found id: ""
	I1213 16:16:51.216212 1542350 logs.go:282] 0 containers: []
	W1213 16:16:51.216221 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:51.216230 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:51.216245 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:51.273877 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:51.273919 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:51.291469 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:51.291502 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:51.365379 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:51.356884    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.357610    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359231    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359765    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.361313    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:51.356884    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.357610    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359231    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.359765    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:51.361313    9868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:51.365447 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:51.365471 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:51.393925 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:51.393997 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:53.927124 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:53.937787 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:53.937865 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:53.965198 1542350 cri.go:89] found id: ""
	I1213 16:16:53.965222 1542350 logs.go:282] 0 containers: []
	W1213 16:16:53.965230 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:53.965236 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:53.965295 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:53.990127 1542350 cri.go:89] found id: ""
	I1213 16:16:53.990153 1542350 logs.go:282] 0 containers: []
	W1213 16:16:53.990162 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:53.990168 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:53.990227 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:54.017573 1542350 cri.go:89] found id: ""
	I1213 16:16:54.017600 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.017610 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:54.017627 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:54.017691 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:54.042201 1542350 cri.go:89] found id: ""
	I1213 16:16:54.042223 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.042232 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:54.042239 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:54.042297 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:54.069040 1542350 cri.go:89] found id: ""
	I1213 16:16:54.069064 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.069072 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:54.069079 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:54.069139 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:54.094593 1542350 cri.go:89] found id: ""
	I1213 16:16:54.094614 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.094624 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:54.094630 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:54.094692 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:54.118976 1542350 cri.go:89] found id: ""
	I1213 16:16:54.119047 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.119070 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:54.119088 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:54.119162 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:54.145323 1542350 cri.go:89] found id: ""
	I1213 16:16:54.145346 1542350 logs.go:282] 0 containers: []
	W1213 16:16:54.145355 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:54.145364 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:54.145375 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:54.170838 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:54.170873 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:54.198725 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:54.198752 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:54.253610 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:54.253646 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:54.272399 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:54.272428 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:54.360945 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:54.351253    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.352548    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.354388    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.355099    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.356700    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:54.351253    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.352548    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.354388    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.355099    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:54.356700    9995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:56.861910 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:56.873998 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:56.874110 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:56.904398 1542350 cri.go:89] found id: ""
	I1213 16:16:56.904423 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.904432 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:56.904438 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:56.904498 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:56.928756 1542350 cri.go:89] found id: ""
	I1213 16:16:56.928783 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.928792 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:56.928798 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:56.928856 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:56.952449 1542350 cri.go:89] found id: ""
	I1213 16:16:56.952473 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.952481 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:56.952487 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:56.952544 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:56.976949 1542350 cri.go:89] found id: ""
	I1213 16:16:56.976973 1542350 logs.go:282] 0 containers: []
	W1213 16:16:56.976981 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:56.976988 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:56.977074 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:57.001996 1542350 cri.go:89] found id: ""
	I1213 16:16:57.002023 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.002032 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:57.002039 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:57.002107 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:57.033494 1542350 cri.go:89] found id: ""
	I1213 16:16:57.033519 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.033527 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:57.033533 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:57.033590 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:16:57.057055 1542350 cri.go:89] found id: ""
	I1213 16:16:57.057082 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.057090 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:16:57.057096 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:16:57.057153 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:16:57.086023 1542350 cri.go:89] found id: ""
	I1213 16:16:57.086047 1542350 logs.go:282] 0 containers: []
	W1213 16:16:57.086057 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:16:57.086066 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:16:57.086078 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:16:57.140604 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:16:57.140639 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:16:57.156471 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:16:57.156501 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:16:57.226365 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:16:57.217689   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.218500   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220107   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220657   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.222574   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:16:57.217689   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.218500   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220107   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.220657   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:16:57.222574   10094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:16:57.226409 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:16:57.226425 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:16:57.251875 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:16:57.251911 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:16:59.781524 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:16:59.792544 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:16:59.792620 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:16:59.817081 1542350 cri.go:89] found id: ""
	I1213 16:16:59.817108 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.817123 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:16:59.817130 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:16:59.817197 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:16:59.854425 1542350 cri.go:89] found id: ""
	I1213 16:16:59.854453 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.854463 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:16:59.854469 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:16:59.854529 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:16:59.891724 1542350 cri.go:89] found id: ""
	I1213 16:16:59.891750 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.891759 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:16:59.891766 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:16:59.891826 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:16:59.921656 1542350 cri.go:89] found id: ""
	I1213 16:16:59.921682 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.921691 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:16:59.921697 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:16:59.921757 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:16:59.946905 1542350 cri.go:89] found id: ""
	I1213 16:16:59.946930 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.946943 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:16:59.946949 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:16:59.947011 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:16:59.974061 1542350 cri.go:89] found id: ""
	I1213 16:16:59.974087 1542350 logs.go:282] 0 containers: []
	W1213 16:16:59.974096 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:16:59.974103 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:16:59.974181 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:00.003912 1542350 cri.go:89] found id: ""
	I1213 16:17:00.003945 1542350 logs.go:282] 0 containers: []
	W1213 16:17:00.003955 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:00.003962 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:00.004041 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:00.129167 1542350 cri.go:89] found id: ""
	I1213 16:17:00.129242 1542350 logs.go:282] 0 containers: []
	W1213 16:17:00.129267 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:00.129291 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:00.129321 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:00.325276 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:00.316397   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.317316   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.318748   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.319511   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.320676   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:00.316397   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.317316   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.318748   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.319511   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:00.320676   10200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:00.325303 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:00.325317 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:00.357630 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:00.357684 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:00.417887 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:00.417929 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:00.512817 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:00.512861 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:03.034231 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:03.045928 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:03.046041 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:03.073150 1542350 cri.go:89] found id: ""
	I1213 16:17:03.073178 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.073187 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:03.073194 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:03.073257 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:03.100010 1542350 cri.go:89] found id: ""
	I1213 16:17:03.100036 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.100046 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:03.100052 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:03.100118 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:03.126901 1542350 cri.go:89] found id: ""
	I1213 16:17:03.126929 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.126938 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:03.126944 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:03.127007 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:03.158512 1542350 cri.go:89] found id: ""
	I1213 16:17:03.158538 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.158547 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:03.158554 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:03.158623 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:03.186730 1542350 cri.go:89] found id: ""
	I1213 16:17:03.186757 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.186766 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:03.186773 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:03.186843 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:03.213877 1542350 cri.go:89] found id: ""
	I1213 16:17:03.213913 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.213922 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:03.213929 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:03.214000 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:03.244284 1542350 cri.go:89] found id: ""
	I1213 16:17:03.244360 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.244382 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:03.244401 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:03.244496 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:03.272102 1542350 cri.go:89] found id: ""
	I1213 16:17:03.272193 1542350 logs.go:282] 0 containers: []
	W1213 16:17:03.272210 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:03.272221 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:03.272234 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:03.330001 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:03.330036 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:03.347681 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:03.347716 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:03.430544 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:03.421427   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.422228   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424046   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424538   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.426050   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:03.421427   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.422228   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424046   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.424538   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:03.426050   10324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:03.430566 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:03.430581 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:03.457512 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:03.457552 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
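	Each cycle above queries the CRI runtime once per expected component and finds nothing, which is why only the kubelet, containerd, dmesg and container-status logs get collected. A compact sketch of that scan, using the same crictl flags that appear in the log:

	    # Hedged sketch of the per-component scan performed above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	        ids=$(sudo crictl ps -a --quiet --name="$name")   # all states, IDs only
	        echo "$name: ${ids:-<none>}"                      # <none> matches the found id: "" lines
	    done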
	I1213 16:17:05.988326 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:06.000598 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:06.000678 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:06.036782 1542350 cri.go:89] found id: ""
	I1213 16:17:06.036859 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.036876 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:06.036891 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:06.036960 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:06.066595 1542350 cri.go:89] found id: ""
	I1213 16:17:06.066623 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.066633 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:06.066640 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:06.066705 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:06.095017 1542350 cri.go:89] found id: ""
	I1213 16:17:06.095047 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.095057 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:06.095064 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:06.095146 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:06.123113 1542350 cri.go:89] found id: ""
	I1213 16:17:06.123140 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.123150 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:06.123156 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:06.123223 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:06.150821 1542350 cri.go:89] found id: ""
	I1213 16:17:06.150847 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.150856 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:06.150862 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:06.150925 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:06.176578 1542350 cri.go:89] found id: ""
	I1213 16:17:06.176608 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.176616 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:06.176623 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:06.176690 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:06.207351 1542350 cri.go:89] found id: ""
	I1213 16:17:06.207387 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.207397 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:06.207404 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:06.207468 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:06.233849 1542350 cri.go:89] found id: ""
	I1213 16:17:06.233872 1542350 logs.go:282] 0 containers: []
	W1213 16:17:06.233881 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:06.233890 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:06.233907 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:06.250685 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:06.250716 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:06.319519 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:06.310314   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.310885   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.312626   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.313247   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.315150   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:06.310314   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.310885   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.312626   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.313247   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:06.315150   10430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:06.319544 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:06.319566 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:06.346128 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:06.346163 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:06.386358 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:06.386439 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:08.950033 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:08.960761 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:08.960908 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:08.984689 1542350 cri.go:89] found id: ""
	I1213 16:17:08.984727 1542350 logs.go:282] 0 containers: []
	W1213 16:17:08.984737 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:08.984760 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:08.984839 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:09.014786 1542350 cri.go:89] found id: ""
	I1213 16:17:09.014811 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.014820 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:09.014826 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:09.014890 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:09.044222 1542350 cri.go:89] found id: ""
	I1213 16:17:09.044257 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.044267 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:09.044276 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:09.044344 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:09.077612 1542350 cri.go:89] found id: ""
	I1213 16:17:09.077685 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.077708 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:09.077726 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:09.077815 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:09.105512 1542350 cri.go:89] found id: ""
	I1213 16:17:09.105535 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.105545 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:09.105551 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:09.105617 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:09.129780 1542350 cri.go:89] found id: ""
	I1213 16:17:09.129803 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.129811 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:09.129817 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:09.129878 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:09.154967 1542350 cri.go:89] found id: ""
	I1213 16:17:09.154993 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.155002 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:09.155009 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:09.155076 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:09.179699 1542350 cri.go:89] found id: ""
	I1213 16:17:09.179763 1542350 logs.go:282] 0 containers: []
	W1213 16:17:09.179789 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:09.179806 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:09.179817 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:09.235549 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:09.235580 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:09.251403 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:09.251431 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:09.319531 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:09.311333   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.311986   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.313546   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.314046   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.315495   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:09.311333   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.311986   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.313546   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.314046   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:09.315495   10545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:09.319549 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:09.319561 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:09.346608 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:09.346650 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:11.878089 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:11.889358 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:11.889432 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:11.915293 1542350 cri.go:89] found id: ""
	I1213 16:17:11.915330 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.915339 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:11.915346 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:11.915408 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:11.945256 1542350 cri.go:89] found id: ""
	I1213 16:17:11.945334 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.945359 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:11.945374 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:11.945452 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:11.969767 1542350 cri.go:89] found id: ""
	I1213 16:17:11.969794 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.969803 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:11.969809 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:11.969871 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:11.993969 1542350 cri.go:89] found id: ""
	I1213 16:17:11.993996 1542350 logs.go:282] 0 containers: []
	W1213 16:17:11.994005 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:11.994011 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:11.994089 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:12.029493 1542350 cri.go:89] found id: ""
	I1213 16:17:12.029521 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.029531 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:12.029543 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:12.029608 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:12.059180 1542350 cri.go:89] found id: ""
	I1213 16:17:12.059208 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.059217 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:12.059223 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:12.059283 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:12.087232 1542350 cri.go:89] found id: ""
	I1213 16:17:12.087261 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.087270 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:12.087276 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:12.087371 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:12.112813 1542350 cri.go:89] found id: ""
	I1213 16:17:12.112835 1542350 logs.go:282] 0 containers: []
	W1213 16:17:12.112844 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:12.112853 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:12.112864 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:12.138376 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:12.138408 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:12.166357 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:12.166387 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:12.222375 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:12.222410 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:12.239215 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:12.239247 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:12.308445 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:12.300806   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.301332   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.302968   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.303390   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.304404   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:12.300806   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.301332   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.302968   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.303390   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:12.304404   10669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
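	Every "dial tcp [::1]:8443: connect: connection refused" above says the same thing: nothing is bound to the apiserver port inside the node. A quick manual confirmation (a sketch only; it assumes `ss` is present in the node image, which it normally is):

	    sudo ss -ltn | grep -w 8443 || echo "nothing listening on 8443"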
	I1213 16:17:14.808692 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:14.819373 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:14.819444 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:14.852674 1542350 cri.go:89] found id: ""
	I1213 16:17:14.852703 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.852712 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:14.852728 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:14.852788 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:14.883668 1542350 cri.go:89] found id: ""
	I1213 16:17:14.883695 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.883704 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:14.883710 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:14.883767 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:14.911607 1542350 cri.go:89] found id: ""
	I1213 16:17:14.911630 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.911638 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:14.911644 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:14.911706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:14.936933 1542350 cri.go:89] found id: ""
	I1213 16:17:14.936960 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.936970 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:14.936977 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:14.937035 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:14.962547 1542350 cri.go:89] found id: ""
	I1213 16:17:14.962570 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.962580 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:14.962586 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:14.962689 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:14.986795 1542350 cri.go:89] found id: ""
	I1213 16:17:14.986820 1542350 logs.go:282] 0 containers: []
	W1213 16:17:14.986836 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:14.986843 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:14.986903 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:15.033107 1542350 cri.go:89] found id: ""
	I1213 16:17:15.033185 1542350 logs.go:282] 0 containers: []
	W1213 16:17:15.033224 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:15.033257 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:15.033365 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:15.061981 1542350 cri.go:89] found id: ""
	I1213 16:17:15.062060 1542350 logs.go:282] 0 containers: []
	W1213 16:17:15.062093 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:15.062116 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:15.062143 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:15.118734 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:15.118772 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:15.135655 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:15.135685 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:15.203637 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:15.195493   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.196273   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.197861   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.198155   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.199758   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:15.195493   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.196273   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.197861   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.198155   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:15.199758   10769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:15.203658 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:15.203670 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:15.229691 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:15.229730 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:17.757141 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:17.767810 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:17.767883 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:17.795906 1542350 cri.go:89] found id: ""
	I1213 16:17:17.795930 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.795939 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:17.795945 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:17.796011 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:17.820499 1542350 cri.go:89] found id: ""
	I1213 16:17:17.820525 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.820534 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:17.820540 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:17.820597 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:17.852893 1542350 cri.go:89] found id: ""
	I1213 16:17:17.852922 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.852931 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:17.852936 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:17.852998 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:17.882522 1542350 cri.go:89] found id: ""
	I1213 16:17:17.882550 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.882559 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:17.882567 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:17.882625 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:17.910091 1542350 cri.go:89] found id: ""
	I1213 16:17:17.910119 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.910128 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:17.910133 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:17.910194 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:17.934842 1542350 cri.go:89] found id: ""
	I1213 16:17:17.934877 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.934886 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:17.934892 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:17.934957 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:17.959436 1542350 cri.go:89] found id: ""
	I1213 16:17:17.959470 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.959480 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:17.959491 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:17.959563 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:17.984392 1542350 cri.go:89] found id: ""
	I1213 16:17:17.984422 1542350 logs.go:282] 0 containers: []
	W1213 16:17:17.984431 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:17.984440 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:17.984452 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:18.039527 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:18.039566 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:18.055611 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:18.055637 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:18.119895 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:18.111397   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.112071   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.113661   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.114180   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.115834   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:18.111397   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.112071   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.113661   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.114180   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:18.115834   10882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:18.119920 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:18.119935 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:18.145247 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:18.145282 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:20.679491 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:20.690101 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:20.690172 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:20.715727 1542350 cri.go:89] found id: ""
	I1213 16:17:20.715753 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.715770 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:20.715780 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:20.715849 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:20.743470 1542350 cri.go:89] found id: ""
	I1213 16:17:20.743496 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.743504 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:20.743511 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:20.743570 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:20.768457 1542350 cri.go:89] found id: ""
	I1213 16:17:20.768480 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.768496 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:20.768503 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:20.768561 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:20.792618 1542350 cri.go:89] found id: ""
	I1213 16:17:20.792644 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.792653 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:20.792660 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:20.792718 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:20.817055 1542350 cri.go:89] found id: ""
	I1213 16:17:20.817077 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.817087 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:20.817093 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:20.817155 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:20.847328 1542350 cri.go:89] found id: ""
	I1213 16:17:20.847351 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.847360 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:20.847366 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:20.847428 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:20.885859 1542350 cri.go:89] found id: ""
	I1213 16:17:20.885882 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.885891 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:20.885898 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:20.885956 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:20.915753 1542350 cri.go:89] found id: ""
	I1213 16:17:20.915784 1542350 logs.go:282] 0 containers: []
	W1213 16:17:20.915794 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:20.915803 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:20.915815 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:20.970894 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:20.970934 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:20.986885 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:20.986910 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:21.055027 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:21.046462   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.046998   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.048753   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.049334   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.051047   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:21.046462   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.046998   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.048753   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.049334   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:21.051047   10995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:21.055049 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:21.055062 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:21.079833 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:21.079866 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:23.608166 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:23.619347 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:23.619414 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:23.649699 1542350 cri.go:89] found id: ""
	I1213 16:17:23.649721 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.649729 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:23.649736 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:23.649795 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:23.675224 1542350 cri.go:89] found id: ""
	I1213 16:17:23.675246 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.675255 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:23.675261 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:23.675349 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:23.700895 1542350 cri.go:89] found id: ""
	I1213 16:17:23.700918 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.700927 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:23.700933 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:23.700996 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:23.729110 1542350 cri.go:89] found id: ""
	I1213 16:17:23.729176 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.729191 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:23.729198 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:23.729257 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:23.753661 1542350 cri.go:89] found id: ""
	I1213 16:17:23.753688 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.753697 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:23.753703 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:23.753774 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:23.778169 1542350 cri.go:89] found id: ""
	I1213 16:17:23.778217 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.778227 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:23.778234 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:23.778301 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:23.802589 1542350 cri.go:89] found id: ""
	I1213 16:17:23.802622 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.802631 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:23.802637 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:23.802708 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:23.832514 1542350 cri.go:89] found id: ""
	I1213 16:17:23.832548 1542350 logs.go:282] 0 containers: []
	W1213 16:17:23.832558 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:23.832569 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:23.832582 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:23.917876 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:23.909157   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.909608   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.910836   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.911543   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.913354   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:23.909157   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.909608   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.910836   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.911543   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:23.913354   11101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:23.917899 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:23.917918 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:23.943509 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:23.943548 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:23.971452 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:23.971478 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:24.027358 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:24.027396 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:26.545810 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:26.556391 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:26.556463 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:26.580187 1542350 cri.go:89] found id: ""
	I1213 16:17:26.580210 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.580219 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:26.580239 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:26.580300 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:26.608397 1542350 cri.go:89] found id: ""
	I1213 16:17:26.608420 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.608429 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:26.608435 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:26.608496 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:26.636638 1542350 cri.go:89] found id: ""
	I1213 16:17:26.636661 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.636669 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:26.636675 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:26.636734 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:26.665248 1542350 cri.go:89] found id: ""
	I1213 16:17:26.665274 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.665283 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:26.665289 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:26.665365 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:26.695808 1542350 cri.go:89] found id: ""
	I1213 16:17:26.695835 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.695854 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:26.695861 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:26.695918 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:26.721653 1542350 cri.go:89] found id: ""
	I1213 16:17:26.721678 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.721687 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:26.721693 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:26.721751 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:26.750218 1542350 cri.go:89] found id: ""
	I1213 16:17:26.750241 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.750250 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:26.750256 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:26.750313 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:26.777036 1542350 cri.go:89] found id: ""
	I1213 16:17:26.777059 1542350 logs.go:282] 0 containers: []
	W1213 16:17:26.777068 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:26.777077 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:26.777088 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:26.833887 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:26.833929 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:26.851275 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:26.851303 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:26.934951 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:26.926741   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.927535   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929160   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929458   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.930955   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:26.926741   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.927535   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929160   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.929458   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:26.930955   11221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:26.934973 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:26.934985 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:26.960388 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:26.960424 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:29.488577 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:29.499475 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:29.499551 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:29.524176 1542350 cri.go:89] found id: ""
	I1213 16:17:29.524202 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.524212 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:29.524219 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:29.524281 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:29.558368 1542350 cri.go:89] found id: ""
	I1213 16:17:29.558393 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.558408 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:29.558415 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:29.558504 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:29.589170 1542350 cri.go:89] found id: ""
	I1213 16:17:29.589197 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.589206 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:29.589212 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:29.589273 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:29.621623 1542350 cri.go:89] found id: ""
	I1213 16:17:29.621697 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.621722 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:29.621741 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:29.621828 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:29.651459 1542350 cri.go:89] found id: ""
	I1213 16:17:29.651534 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.651557 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:29.651584 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:29.651712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:29.676637 1542350 cri.go:89] found id: ""
	I1213 16:17:29.676663 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.676673 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:29.676679 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:29.676752 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:29.701821 1542350 cri.go:89] found id: ""
	I1213 16:17:29.701845 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.701855 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:29.701861 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:29.701920 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:29.726528 1542350 cri.go:89] found id: ""
	I1213 16:17:29.726555 1542350 logs.go:282] 0 containers: []
	W1213 16:17:29.726564 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:29.726574 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:29.726585 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:29.781999 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:29.782035 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:29.798088 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:29.798116 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:29.881323 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:29.870998   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.871883   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.873746   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.874663   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.876377   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:29.870998   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.871883   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.873746   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.874663   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:29.876377   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:29.881348 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:29.881361 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:29.911425 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:29.911464 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:32.442588 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:32.453594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:32.453664 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:32.479865 1542350 cri.go:89] found id: ""
	I1213 16:17:32.479893 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.479902 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:32.479909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:32.479975 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:32.505131 1542350 cri.go:89] found id: ""
	I1213 16:17:32.505159 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.505168 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:32.505175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:32.505239 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:32.529697 1542350 cri.go:89] found id: ""
	I1213 16:17:32.529723 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.529732 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:32.529738 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:32.529796 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:32.554812 1542350 cri.go:89] found id: ""
	I1213 16:17:32.554834 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.554850 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:32.554856 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:32.554915 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:32.582244 1542350 cri.go:89] found id: ""
	I1213 16:17:32.582270 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.582279 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:32.582286 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:32.582347 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:32.613711 1542350 cri.go:89] found id: ""
	I1213 16:17:32.613738 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.613747 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:32.613754 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:32.613818 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:32.642070 1542350 cri.go:89] found id: ""
	I1213 16:17:32.642097 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.642106 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:32.642112 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:32.642168 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:32.667382 1542350 cri.go:89] found id: ""
	I1213 16:17:32.667406 1542350 logs.go:282] 0 containers: []
	W1213 16:17:32.667415 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:32.667424 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:32.667436 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:32.683777 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:32.683808 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:32.750802 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:32.740355   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.741255   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.742960   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.743298   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.746651   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:32.740355   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.741255   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.742960   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.743298   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:32.746651   11446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:32.750824 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:32.750838 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:32.776516 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:32.776551 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:32.809331 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:32.809358 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:35.374938 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:35.387203 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:35.387276 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:35.412099 1542350 cri.go:89] found id: ""
	I1213 16:17:35.412124 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.412133 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:35.412139 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:35.412195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:35.436994 1542350 cri.go:89] found id: ""
	I1213 16:17:35.437031 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.437040 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:35.437047 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:35.437115 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:35.461531 1542350 cri.go:89] found id: ""
	I1213 16:17:35.461554 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.461562 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:35.461568 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:35.461627 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:35.486070 1542350 cri.go:89] found id: ""
	I1213 16:17:35.486095 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.486105 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:35.486118 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:35.486176 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:35.515476 1542350 cri.go:89] found id: ""
	I1213 16:17:35.515501 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.515510 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:35.515516 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:35.515576 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:35.545886 1542350 cri.go:89] found id: ""
	I1213 16:17:35.545959 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.545995 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:35.546020 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:35.546110 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:35.575465 1542350 cri.go:89] found id: ""
	I1213 16:17:35.575489 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.575498 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:35.575504 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:35.575563 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:35.607235 1542350 cri.go:89] found id: ""
	I1213 16:17:35.607264 1542350 logs.go:282] 0 containers: []
	W1213 16:17:35.607273 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:35.607282 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:35.607294 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:35.671811 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:35.671850 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:35.687939 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:35.687972 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:35.751714 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:35.742668   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.743226   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.744917   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.745517   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.747275   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:35.742668   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.743226   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.744917   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.745517   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:35.747275   11564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:35.751733 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:35.751746 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:35.777517 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:35.777554 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:38.308841 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:38.319569 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:38.319645 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:38.344249 1542350 cri.go:89] found id: ""
	I1213 16:17:38.344276 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.344285 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:38.344291 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:38.344349 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:38.368637 1542350 cri.go:89] found id: ""
	I1213 16:17:38.368666 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.368676 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:38.368682 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:38.368746 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:38.397310 1542350 cri.go:89] found id: ""
	I1213 16:17:38.397335 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.397344 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:38.397350 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:38.397409 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:38.426892 1542350 cri.go:89] found id: ""
	I1213 16:17:38.426967 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.426989 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:38.427008 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:38.427091 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:38.451400 1542350 cri.go:89] found id: ""
	I1213 16:17:38.451423 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.451432 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:38.451437 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:38.451500 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:38.476411 1542350 cri.go:89] found id: ""
	I1213 16:17:38.476433 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.476441 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:38.476448 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:38.476506 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:38.502060 1542350 cri.go:89] found id: ""
	I1213 16:17:38.502083 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.502092 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:38.502098 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:38.502158 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:38.527156 1542350 cri.go:89] found id: ""
	I1213 16:17:38.527217 1542350 logs.go:282] 0 containers: []
	W1213 16:17:38.527240 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:38.527264 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:38.527289 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:38.583123 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:38.583161 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:38.606934 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:38.607014 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:38.678774 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:38.669944   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.670742   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672406   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672726   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.674153   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:38.669944   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.670742   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672406   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.672726   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:38.674153   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:38.678794 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:38.678806 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:38.703623 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:38.703656 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:41.235499 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:41.246098 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:41.246199 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:41.272817 1542350 cri.go:89] found id: ""
	I1213 16:17:41.272884 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.272907 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:41.272921 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:41.272995 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:41.297573 1542350 cri.go:89] found id: ""
	I1213 16:17:41.297599 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.297608 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:41.297614 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:41.297722 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:41.325595 1542350 cri.go:89] found id: ""
	I1213 16:17:41.325663 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.325695 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:41.325708 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:41.325784 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:41.350495 1542350 cri.go:89] found id: ""
	I1213 16:17:41.350519 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.350528 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:41.350534 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:41.350593 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:41.374833 1542350 cri.go:89] found id: ""
	I1213 16:17:41.374860 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.374869 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:41.374874 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:41.374931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:41.400881 1542350 cri.go:89] found id: ""
	I1213 16:17:41.400911 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.400920 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:41.400926 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:41.400983 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:41.425159 1542350 cri.go:89] found id: ""
	I1213 16:17:41.425182 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.425191 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:41.425197 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:41.425255 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:41.449690 1542350 cri.go:89] found id: ""
	I1213 16:17:41.449765 1542350 logs.go:282] 0 containers: []
	W1213 16:17:41.449788 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:41.449808 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:41.449845 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:41.465414 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:41.465441 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:41.531758 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:41.522350   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.523854   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.524927   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526433   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526949   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:41.522350   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.523854   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.524927   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526433   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:41.526949   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:41.531782 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:41.531795 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:41.557072 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:41.557104 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:41.589367 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:41.589397 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:44.161155 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:44.173267 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:44.173342 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:44.202655 1542350 cri.go:89] found id: ""
	I1213 16:17:44.202682 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.202692 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:44.202699 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:44.202758 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:44.227871 1542350 cri.go:89] found id: ""
	I1213 16:17:44.227897 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.227905 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:44.227911 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:44.227972 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:44.253446 1542350 cri.go:89] found id: ""
	I1213 16:17:44.253473 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.253481 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:44.253487 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:44.253543 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:44.279358 1542350 cri.go:89] found id: ""
	I1213 16:17:44.279383 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.279392 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:44.279398 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:44.279464 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:44.303249 1542350 cri.go:89] found id: ""
	I1213 16:17:44.303275 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.303284 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:44.303344 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:44.303410 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:44.327446 1542350 cri.go:89] found id: ""
	I1213 16:17:44.327471 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.327480 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:44.327486 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:44.327546 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:44.353767 1542350 cri.go:89] found id: ""
	I1213 16:17:44.353793 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.353802 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:44.353808 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:44.353865 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:44.382033 1542350 cri.go:89] found id: ""
	I1213 16:17:44.382060 1542350 logs.go:282] 0 containers: []
	W1213 16:17:44.382068 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:44.382078 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:44.382089 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:44.436599 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:44.436634 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:44.452268 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:44.452298 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:44.515099 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:44.507045   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.507744   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509208   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509703   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.511153   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:44.507045   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.507744   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509208   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.509703   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:44.511153   11901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:44.515122 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:44.515134 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:44.540023 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:44.540059 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:47.069691 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:47.080543 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:47.080615 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:47.114986 1542350 cri.go:89] found id: ""
	I1213 16:17:47.115062 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.115085 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:47.115103 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:47.115194 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:47.148767 1542350 cri.go:89] found id: ""
	I1213 16:17:47.148840 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.148850 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:47.148857 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:47.148931 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:47.174407 1542350 cri.go:89] found id: ""
	I1213 16:17:47.174436 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.174445 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:47.174452 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:47.175791 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:47.207990 1542350 cri.go:89] found id: ""
	I1213 16:17:47.208024 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.208034 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:47.208041 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:47.208115 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:47.232910 1542350 cri.go:89] found id: ""
	I1213 16:17:47.232938 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.232947 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:47.232953 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:47.233015 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:47.256927 1542350 cri.go:89] found id: ""
	I1213 16:17:47.256952 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.256961 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:47.256967 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:47.257049 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:47.285254 1542350 cri.go:89] found id: ""
	I1213 16:17:47.285281 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.285290 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:47.285296 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:47.285356 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:47.309997 1542350 cri.go:89] found id: ""
	I1213 16:17:47.310027 1542350 logs.go:282] 0 containers: []
	W1213 16:17:47.310037 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:47.310046 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:47.310060 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:47.326038 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:47.326073 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:47.390775 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:47.383176   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.383650   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385126   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385506   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.386933   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:47.383176   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.383650   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385126   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.385506   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:47.386933   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:47.390796 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:47.390809 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:47.415331 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:47.415362 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:47.442477 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:47.442503 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:50.000902 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:50.015948 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:50.016030 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:50.046794 1542350 cri.go:89] found id: ""
	I1213 16:17:50.046819 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.046827 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:50.046834 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:50.046890 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:50.073072 1542350 cri.go:89] found id: ""
	I1213 16:17:50.073106 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.073116 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:50.073124 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:50.073186 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:50.111358 1542350 cri.go:89] found id: ""
	I1213 16:17:50.111384 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.111393 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:50.111403 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:50.111468 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:50.141482 1542350 cri.go:89] found id: ""
	I1213 16:17:50.141510 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.141519 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:50.141525 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:50.141584 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:50.168684 1542350 cri.go:89] found id: ""
	I1213 16:17:50.168711 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.168720 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:50.168727 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:50.168806 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:50.194609 1542350 cri.go:89] found id: ""
	I1213 16:17:50.194633 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.194642 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:50.194648 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:50.194708 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:50.220707 1542350 cri.go:89] found id: ""
	I1213 16:17:50.220732 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.220741 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:50.220746 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:50.220810 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:50.245930 1542350 cri.go:89] found id: ""
	I1213 16:17:50.245956 1542350 logs.go:282] 0 containers: []
	W1213 16:17:50.245965 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:50.245975 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:50.245987 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:50.301111 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:50.301147 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:50.317024 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:50.317051 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:50.379354 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:50.370376   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.371063   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.372767   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.373333   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.375036   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:50.370376   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.371063   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.372767   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.373333   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:50.375036   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:50.379375 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:50.379388 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:50.403891 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:50.403925 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:52.933071 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:52.944075 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:52.944148 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:52.969292 1542350 cri.go:89] found id: ""
	I1213 16:17:52.969318 1542350 logs.go:282] 0 containers: []
	W1213 16:17:52.969327 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:52.969333 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:52.969393 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:52.997688 1542350 cri.go:89] found id: ""
	I1213 16:17:52.997717 1542350 logs.go:282] 0 containers: []
	W1213 16:17:52.997727 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:52.997733 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:52.997795 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:53.024102 1542350 cri.go:89] found id: ""
	I1213 16:17:53.024134 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.024144 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:53.024150 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:53.024214 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:53.054126 1542350 cri.go:89] found id: ""
	I1213 16:17:53.054149 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.054159 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:53.054165 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:53.054227 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:53.078840 1542350 cri.go:89] found id: ""
	I1213 16:17:53.078918 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.078940 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:53.078958 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:53.079041 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:53.134282 1542350 cri.go:89] found id: ""
	I1213 16:17:53.134313 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.134326 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:53.134332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:53.134401 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:53.170263 1542350 cri.go:89] found id: ""
	I1213 16:17:53.170287 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.170296 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:53.170302 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:53.170366 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:53.195555 1542350 cri.go:89] found id: ""
	I1213 16:17:53.195578 1542350 logs.go:282] 0 containers: []
	W1213 16:17:53.195587 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:53.195596 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:53.195612 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:53.221475 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:53.221510 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:53.256145 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:53.256172 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:53.312142 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:53.312178 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:53.328755 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:53.328784 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:53.392981 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:53.384949   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.385331   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.386801   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.387094   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.388517   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:53.384949   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.385331   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.386801   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.387094   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:53.388517   12254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:55.894678 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:55.905837 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:55.905910 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:55.931137 1542350 cri.go:89] found id: ""
	I1213 16:17:55.931159 1542350 logs.go:282] 0 containers: []
	W1213 16:17:55.931168 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:55.931175 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:55.931236 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:55.955775 1542350 cri.go:89] found id: ""
	I1213 16:17:55.955801 1542350 logs.go:282] 0 containers: []
	W1213 16:17:55.955810 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:55.955817 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:55.955877 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:55.981227 1542350 cri.go:89] found id: ""
	I1213 16:17:55.981253 1542350 logs.go:282] 0 containers: []
	W1213 16:17:55.981262 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:55.981268 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:55.981329 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:56.008866 1542350 cri.go:89] found id: ""
	I1213 16:17:56.008892 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.008902 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:56.008909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:56.008975 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:56.035606 1542350 cri.go:89] found id: ""
	I1213 16:17:56.035635 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.035644 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:56.035650 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:56.035712 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:56.061753 1542350 cri.go:89] found id: ""
	I1213 16:17:56.061780 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.061789 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:56.061795 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:56.061858 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:56.099036 1542350 cri.go:89] found id: ""
	I1213 16:17:56.099065 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.099074 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:56.099081 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:56.099142 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:56.133464 1542350 cri.go:89] found id: ""
	I1213 16:17:56.133491 1542350 logs.go:282] 0 containers: []
	W1213 16:17:56.133500 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:56.133510 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:56.133522 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:56.155287 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:56.155412 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:56.223561 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:56.214782   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.215583   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217326   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217944   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.219582   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:56.214782   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.215583   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217326   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.217944   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:56.219582   12349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:56.223629 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:56.223650 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:56.249923 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:56.249965 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:17:56.280662 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:56.280692 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:58.836837 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:17:58.848594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:17:58.848659 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:17:58.881904 1542350 cri.go:89] found id: ""
	I1213 16:17:58.881927 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.881935 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:17:58.881941 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:17:58.882001 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:17:58.917932 1542350 cri.go:89] found id: ""
	I1213 16:17:58.917954 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.917963 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:17:58.917969 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:17:58.918028 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:17:58.945580 1542350 cri.go:89] found id: ""
	I1213 16:17:58.945653 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.945668 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:17:58.945676 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:17:58.945753 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:17:58.971398 1542350 cri.go:89] found id: ""
	I1213 16:17:58.971424 1542350 logs.go:282] 0 containers: []
	W1213 16:17:58.971434 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:17:58.971440 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:17:58.971503 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:17:59.001302 1542350 cri.go:89] found id: ""
	I1213 16:17:59.001329 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.001339 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:17:59.001345 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:17:59.001409 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:17:59.028353 1542350 cri.go:89] found id: ""
	I1213 16:17:59.028379 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.028388 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:17:59.028394 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:17:59.028470 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:17:59.052548 1542350 cri.go:89] found id: ""
	I1213 16:17:59.052577 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.052586 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:17:59.052593 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:17:59.052653 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:17:59.077515 1542350 cri.go:89] found id: ""
	I1213 16:17:59.077541 1542350 logs.go:282] 0 containers: []
	W1213 16:17:59.077550 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:17:59.077560 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:17:59.077571 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:17:59.141173 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:17:59.141249 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:17:59.158291 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:17:59.158371 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:17:59.225799 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:17:59.216416   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.217255   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.219459   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.220025   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.221754   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:17:59.216416   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.217255   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.219459   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.220025   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:17:59.221754   12463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:17:59.225867 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:17:59.225890 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:17:59.251561 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:17:59.251597 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:01.784053 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:01.795325 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:01.795393 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:01.819579 1542350 cri.go:89] found id: ""
	I1213 16:18:01.819605 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.819615 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:01.819622 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:01.819683 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:01.857561 1542350 cri.go:89] found id: ""
	I1213 16:18:01.857588 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.857597 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:01.857604 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:01.857668 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:01.893605 1542350 cri.go:89] found id: ""
	I1213 16:18:01.893633 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.893642 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:01.893650 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:01.893706 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:01.931676 1542350 cri.go:89] found id: ""
	I1213 16:18:01.931783 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.931803 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:01.931812 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:01.931935 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:01.959175 1542350 cri.go:89] found id: ""
	I1213 16:18:01.959249 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.959272 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:01.959292 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:01.959398 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:01.984753 1542350 cri.go:89] found id: ""
	I1213 16:18:01.984784 1542350 logs.go:282] 0 containers: []
	W1213 16:18:01.984794 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:01.984800 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:01.984865 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:02.016830 1542350 cri.go:89] found id: ""
	I1213 16:18:02.016860 1542350 logs.go:282] 0 containers: []
	W1213 16:18:02.016870 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:02.016876 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:02.016939 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:02.042747 1542350 cri.go:89] found id: ""
	I1213 16:18:02.042776 1542350 logs.go:282] 0 containers: []
	W1213 16:18:02.042785 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:02.042794 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:02.042805 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:02.101057 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:02.101093 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:02.118948 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:02.118972 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:02.188051 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:02.179066   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180018   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180758   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182317   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182771   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:02.179066   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180018   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.180758   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182317   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:02.182771   12580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:02.188077 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:02.188091 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:02.214276 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:02.214316 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:04.742630 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:04.753656 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:04.753725 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:04.779281 1542350 cri.go:89] found id: ""
	I1213 16:18:04.779338 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.779349 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:04.779355 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:04.779418 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:04.806060 1542350 cri.go:89] found id: ""
	I1213 16:18:04.806099 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.806108 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:04.806114 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:04.806195 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:04.831390 1542350 cri.go:89] found id: ""
	I1213 16:18:04.831416 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.831425 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:04.831432 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:04.831501 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:04.865636 1542350 cri.go:89] found id: ""
	I1213 16:18:04.865663 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.865673 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:04.865680 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:04.865746 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:04.893812 1542350 cri.go:89] found id: ""
	I1213 16:18:04.893836 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.893845 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:04.893851 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:04.893916 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:04.922033 1542350 cri.go:89] found id: ""
	I1213 16:18:04.922062 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.922071 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:04.922077 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:04.922135 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:04.952026 1542350 cri.go:89] found id: ""
	I1213 16:18:04.952052 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.952061 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:04.952068 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:04.952129 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:04.979878 1542350 cri.go:89] found id: ""
	I1213 16:18:04.979901 1542350 logs.go:282] 0 containers: []
	W1213 16:18:04.979910 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:04.979919 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:04.979931 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:05.038448 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:05.038485 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:05.055056 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:05.055086 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:05.138791 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:05.128391   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.129071   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.130643   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.131070   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.134791   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:05.128391   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.129071   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.130643   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.131070   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:05.134791   12692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:05.138815 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:05.138828 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:05.170511 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:05.170549 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:07.701516 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:07.711811 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:07.711881 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:07.737115 1542350 cri.go:89] found id: ""
	I1213 16:18:07.737139 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.737148 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:07.737154 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:07.737216 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:07.761282 1542350 cri.go:89] found id: ""
	I1213 16:18:07.761305 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.761313 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:07.761319 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:07.761375 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:07.788777 1542350 cri.go:89] found id: ""
	I1213 16:18:07.788804 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.788813 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:07.788828 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:07.788893 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:07.813606 1542350 cri.go:89] found id: ""
	I1213 16:18:07.813633 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.813642 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:07.813650 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:07.813762 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:07.846070 1542350 cri.go:89] found id: ""
	I1213 16:18:07.846100 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.846109 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:07.846115 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:07.846178 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:07.877868 1542350 cri.go:89] found id: ""
	I1213 16:18:07.877894 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.877903 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:07.877909 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:07.877978 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:07.906297 1542350 cri.go:89] found id: ""
	I1213 16:18:07.906322 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.906331 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:07.906337 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:07.906411 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:07.935165 1542350 cri.go:89] found id: ""
	I1213 16:18:07.935191 1542350 logs.go:282] 0 containers: []
	W1213 16:18:07.935200 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:07.935209 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:07.935221 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:07.990632 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:07.990666 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:08.006620 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:08.006668 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:08.074292 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:08.065222   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.066181   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.067860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.068200   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.069739   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:08.065222   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.066181   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.067860   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.068200   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:08.069739   12806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:08.074313 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:08.074338 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:08.103200 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:08.103236 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:10.643571 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:10.654051 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:10.654120 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:10.678184 1542350 cri.go:89] found id: ""
	I1213 16:18:10.678213 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.678222 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:10.678229 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:10.678286 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:10.714102 1542350 cri.go:89] found id: ""
	I1213 16:18:10.714129 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.714137 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:10.714143 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:10.714204 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:10.738091 1542350 cri.go:89] found id: ""
	I1213 16:18:10.738114 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.738123 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:10.738129 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:10.738187 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:10.762969 1542350 cri.go:89] found id: ""
	I1213 16:18:10.762996 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.763005 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:10.763010 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:10.763068 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:10.788695 1542350 cri.go:89] found id: ""
	I1213 16:18:10.788718 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.788726 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:10.788732 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:10.788790 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:10.813304 1542350 cri.go:89] found id: ""
	I1213 16:18:10.813331 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.813339 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:10.813346 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:10.813404 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:10.840988 1542350 cri.go:89] found id: ""
	I1213 16:18:10.841013 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.841022 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:10.841028 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:10.841085 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:10.872923 1542350 cri.go:89] found id: ""
	I1213 16:18:10.872947 1542350 logs.go:282] 0 containers: []
	W1213 16:18:10.872957 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:10.872966 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:10.872978 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:10.913313 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:10.913342 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:10.970044 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:10.970079 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:10.986369 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:10.986399 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:11.056440 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:11.047528   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.048477   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050273   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050853   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.052407   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:11.047528   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.048477   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050273   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.050853   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:11.052407   12931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:11.056461 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:11.056474 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:13.582630 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:13.593495 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:13.593570 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:13.618406 1542350 cri.go:89] found id: ""
	I1213 16:18:13.618429 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.618438 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:13.618444 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:13.618503 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:13.643366 1542350 cri.go:89] found id: ""
	I1213 16:18:13.643392 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.643401 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:13.643407 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:13.643470 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:13.668878 1542350 cri.go:89] found id: ""
	I1213 16:18:13.668903 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.668912 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:13.668918 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:13.668976 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:13.694282 1542350 cri.go:89] found id: ""
	I1213 16:18:13.694309 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.694318 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:13.694324 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:13.694383 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:13.722288 1542350 cri.go:89] found id: ""
	I1213 16:18:13.722318 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.722326 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:13.722332 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:13.722391 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:13.749131 1542350 cri.go:89] found id: ""
	I1213 16:18:13.749156 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.749165 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:13.749177 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:13.749234 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:13.772877 1542350 cri.go:89] found id: ""
	I1213 16:18:13.772905 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.772915 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:13.772924 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:13.773024 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:13.797195 1542350 cri.go:89] found id: ""
	I1213 16:18:13.797222 1542350 logs.go:282] 0 containers: []
	W1213 16:18:13.797232 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:13.797241 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:13.797253 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:13.875404 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:13.862729   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.863769   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.867326   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869105   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869716   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:13.862729   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.863769   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.867326   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869105   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:13.869716   13022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:13.875426 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:13.875439 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:13.907083 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:13.907122 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:13.940383 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:13.940412 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:13.999033 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:13.999073 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:16.517512 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:16.531616 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:16.531687 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:16.555921 1542350 cri.go:89] found id: ""
	I1213 16:18:16.555944 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.555952 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:16.555958 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:16.556017 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:16.585501 1542350 cri.go:89] found id: ""
	I1213 16:18:16.585523 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.585532 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:16.585538 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:16.585597 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:16.609776 1542350 cri.go:89] found id: ""
	I1213 16:18:16.609800 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.609810 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:16.609815 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:16.609874 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:16.633727 1542350 cri.go:89] found id: ""
	I1213 16:18:16.633801 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.633828 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:16.633847 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:16.633919 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:16.663010 1542350 cri.go:89] found id: ""
	I1213 16:18:16.663034 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.663042 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:16.663048 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:16.663104 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:16.689483 1542350 cri.go:89] found id: ""
	I1213 16:18:16.689506 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.689514 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:16.689521 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:16.689579 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:16.713920 1542350 cri.go:89] found id: ""
	I1213 16:18:16.713946 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.713955 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:16.713963 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:16.714023 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:16.739270 1542350 cri.go:89] found id: ""
	I1213 16:18:16.739297 1542350 logs.go:282] 0 containers: []
	W1213 16:18:16.739366 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:16.739377 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:16.739391 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:16.805237 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:16.796686   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.797427   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799164   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799670   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.801345   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:16.796686   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.797427   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799164   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.799670   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:16.801345   13135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:16.805260 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:16.805272 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:16.830391 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:16.830421 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:16.875174 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:16.875203 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:16.940670 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:16.940707 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:19.457858 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:19.469305 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 16:18:19.469382 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 16:18:19.494702 1542350 cri.go:89] found id: ""
	I1213 16:18:19.494728 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.494739 1542350 logs.go:284] No container was found matching "kube-apiserver"
	I1213 16:18:19.494745 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 16:18:19.494805 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 16:18:19.526787 1542350 cri.go:89] found id: ""
	I1213 16:18:19.526811 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.526820 1542350 logs.go:284] No container was found matching "etcd"
	I1213 16:18:19.526826 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 16:18:19.526892 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 16:18:19.553929 1542350 cri.go:89] found id: ""
	I1213 16:18:19.553952 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.553961 1542350 logs.go:284] No container was found matching "coredns"
	I1213 16:18:19.553967 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 16:18:19.554025 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 16:18:19.578994 1542350 cri.go:89] found id: ""
	I1213 16:18:19.579021 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.579029 1542350 logs.go:284] No container was found matching "kube-scheduler"
	I1213 16:18:19.579036 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 16:18:19.579094 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 16:18:19.605160 1542350 cri.go:89] found id: ""
	I1213 16:18:19.605184 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.605202 1542350 logs.go:284] No container was found matching "kube-proxy"
	I1213 16:18:19.605209 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 16:18:19.605271 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 16:18:19.629853 1542350 cri.go:89] found id: ""
	I1213 16:18:19.629880 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.629889 1542350 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 16:18:19.629896 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 16:18:19.629963 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 16:18:19.654551 1542350 cri.go:89] found id: ""
	I1213 16:18:19.654578 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.654588 1542350 logs.go:284] No container was found matching "kindnet"
	I1213 16:18:19.654594 1542350 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 16:18:19.654674 1542350 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 16:18:19.679386 1542350 cri.go:89] found id: ""
	I1213 16:18:19.679410 1542350 logs.go:282] 0 containers: []
	W1213 16:18:19.679420 1542350 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 16:18:19.679429 1542350 logs.go:123] Gathering logs for containerd ...
	I1213 16:18:19.679440 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 16:18:19.704792 1542350 logs.go:123] Gathering logs for container status ...
	I1213 16:18:19.704824 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 16:18:19.733848 1542350 logs.go:123] Gathering logs for kubelet ...
	I1213 16:18:19.733877 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 16:18:19.789321 1542350 logs.go:123] Gathering logs for dmesg ...
	I1213 16:18:19.789357 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 16:18:19.805414 1542350 logs.go:123] Gathering logs for describe nodes ...
	I1213 16:18:19.805442 1542350 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 16:18:19.893754 1542350 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:19.884877   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886081   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886716   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888200   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888520   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 16:18:19.884877   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886081   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.886716   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888200   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:19.888520   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 16:18:22.394654 1542350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:18:22.408580 1542350 out.go:203] 
	W1213 16:18:22.411606 1542350 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 16:18:22.411646 1542350 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 16:18:22.411657 1542350 out.go:285] * Related issues:
	W1213 16:18:22.411669 1542350 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 16:18:22.411682 1542350 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 16:18:22.414454 1542350 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172900077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172913106Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172962434Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172980173Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.172991151Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173001884Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173012173Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173023233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173045772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173088831Z" level=info msg="Connect containerd service"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.173368570Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.174111740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.184422184Z" level=info msg="Start subscribing containerd event"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.184638121Z" level=info msg="Start recovering state"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.184605425Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.184847954Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221873894Z" level=info msg="Start event monitor"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221935570Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221945818Z" level=info msg="Start streaming server"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221955041Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221964312Z" level=info msg="runtime interface starting up..."
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.221971163Z" level=info msg="starting plugins..."
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.222006157Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 16:12:20 newest-cni-526531 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 16:12:20 newest-cni-526531 containerd[554]: time="2025-12-13T16:12:20.224181983Z" level=info msg="containerd successfully booted in 0.088659s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:18:35.317495   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:35.318017   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:35.319735   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:35.320168   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:18:35.321651   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +18.759368] overlayfs: idmapped layers are currently not supported
	[Dec13 13:37] overlayfs: idmapped layers are currently not supported
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 16:18:35 up  8:01,  0 user,  load average: 1.16, 0.80, 1.06
	Linux newest-cni-526531 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 16:18:32 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:18:33 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 13 16:18:33 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:33 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:33 newest-cni-526531 kubelet[13785]: E1213 16:18:33.192326   13785 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:18:33 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:18:33 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:18:33 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 13 16:18:33 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:33 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:33 newest-cni-526531 kubelet[13818]: E1213 16:18:33.901365   13818 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:18:33 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:18:33 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:18:34 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 13 16:18:34 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:34 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:34 newest-cni-526531 kubelet[13826]: E1213 16:18:34.645480   13826 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:18:34 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:18:34 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:18:35 newest-cni-526531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 13 16:18:35 newest-cni-526531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:35 newest-cni-526531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:18:35 newest-cni-526531 kubelet[13919]: E1213 16:18:35.388502   13919 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:18:35 newest-cni-526531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:18:35 newest-cni-526531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-526531 -n newest-cni-526531: exit status 2 (411.726393ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-526531" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.68s)
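
Note: the kubelet log captured above points at the root cause of this Pause failure: with Kubernetes v1.35.0-beta.0 the kubelet refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so kube-apiserver never comes up and minikube exits with K8S_APISERVER_MISSING. As a minimal diagnostic sketch (standard commands, not part of this test run), the host's cgroup mode can be confirmed with:

    # "cgroup2fs" means cgroup v2; "tmpfs" means the host is still on cgroup v1
    stat -fc %T /sys/fs/cgroup
    # Docker's view of the same information
    docker info --format '{{.CgroupVersion}}'

If the runner reports cgroup v1 (as the kernel line "5.15.0-1084-aws #91~20.04.1-Ubuntu" suggests for this Ubuntu 20.04 host), likely remediations are booting the host with systemd.unified_cgroup_hierarchy=1 or pinning an older Kubernetes version for the newest-cni jobs; both are suggestions inferred from the error above, not something verified in this run.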

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (258.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 16:20:18.171677 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 16:20:38.304372 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 16:22:01.375550 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1213 16:22:37.798510 1252934 config.go:182] Loaded profile config "calico-023791": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 16:23:23.671516 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 16:23:42.552832 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544: exit status 2 (378.069527ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-439544" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-439544 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-439544 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.076µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-439544 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
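(Editor's note) The helpers_test.go:338 warnings above come from the harness repeatedly listing pods in the kubernetes-dashboard namespace by label selector until the 9m0s deadline expires; every attempt fails with "connection refused" because nothing is answering on 192.168.85.2:8443. For readers who want to reproduce the check outside the test binary, below is a minimal client-go sketch of the same poll. It is an illustrative approximation, not the harness's actual code; the kubeconfig path is the one reported for this run, and it assumes the kubeconfig's current context points at the no-preload-439544 profile.

	// podpoll.go - sketch of the label-selector poll the harness performs (assumption: not helpers_test.go itself).
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from this run's environment; current context assumed to be no-preload-439544.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22122-1251074/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Same overall budget as the test: give up after 9 minutes.
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()

		for {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// This is the point at which the report above logs "connection refused"
				// while kube-apiserver is not listening.
				fmt.Println("WARNING: pod list failed:", err)
			} else if len(pods.Items) > 0 {
				fmt.Println("dashboard pod present:", pods.Items[0].Name)
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("context deadline exceeded")
				return
			case <-time.After(5 * time.Second):
			}
		}
	}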
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-439544
helpers_test.go:244: (dbg) docker inspect no-preload-439544:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	        "Created": "2025-12-13T15:54:12.178460013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1532771,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T16:04:42.677982497Z",
	            "FinishedAt": "2025-12-13T16:04:41.261584549Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hostname",
	        "HostsPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/hosts",
	        "LogPath": "/var/lib/docker/containers/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d/53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d-json.log",
	        "Name": "/no-preload-439544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-439544:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-439544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53d57adb0653e5d983be32648ea483875a844ed3c89933a5cc08ecb6f22f575d",
	                "LowerDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/259239316cc639b15a45ba2a306d1aad60e1efa2612fd958797938594b8a8ce3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-439544",
	                "Source": "/var/lib/docker/volumes/no-preload-439544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-439544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-439544",
	                "name.minikube.sigs.k8s.io": "no-preload-439544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4dced35fb175add3b26a40dff982545ee75f124f4735db30543f89845b336b1c",
	            "SandboxKey": "/var/run/docker/netns/4dced35fb175",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34228"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34229"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34232"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34230"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34231"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-439544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:74:3b:fa:0b:79",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77a202cfc0163f00e436e53caf7341626192055eeb5da6a6f5d953ced7f7adfb",
	                    "EndpointID": "7084aedd50f3a2db715b196cf320f0078e1627ae582576065d327fcc3de1e2ca",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-439544",
	                        "53d57adb0653"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
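(Editor's note) The inspect output confirms the kic container is still running and that 8443/tcp remains published to 127.0.0.1:34231, so the connection-refused errors above point at kube-apiserver not listening inside the node rather than at a missing container or port mapping. A stand-alone probe like the following (addresses copied from this run; a diagnostic sketch, not part of the suite) distinguishes the two cases.

	// probe.go - quick TCP reachability check for the two apiserver endpoints seen in this report.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		for _, addr := range []string{
			"192.168.85.2:8443", // container IP used by the failing pod-list calls
			"127.0.0.1:34231",   // host-published mapping for 8443/tcp from docker inspect
		} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				// Expect "connection refused" on both while kube-apiserver is stopped.
				fmt.Println(addr, "=>", err)
				continue
			}
			conn.Close()
			fmt.Println(addr, "=> open")
		}
	}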
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544: exit status 2 (327.195227ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-439544 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-439544 logs -n 25: (1.052346437s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-023791 sudo systemctl cat kubelet --no-pager                                                                                                                   │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:22 UTC │ 13 Dec 25 16:22 UTC │
	│ ssh     │ -p calico-023791 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                    │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:22 UTC │ 13 Dec 25 16:22 UTC │
	│ ssh     │ -p calico-023791 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:22 UTC │ 13 Dec 25 16:23 UTC │
	│ ssh     │ -p calico-023791 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ ssh     │ -p calico-023791 sudo systemctl status docker --all --full --no-pager                                                                                                    │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │                     │
	│ ssh     │ -p calico-023791 sudo systemctl cat docker --no-pager                                                                                                                    │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ ssh     │ -p calico-023791 sudo cat /etc/docker/daemon.json                                                                                                                        │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │                     │
	│ ssh     │ -p calico-023791 sudo docker system info                                                                                                                                 │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │                     │
	│ ssh     │ -p calico-023791 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │                     │
	│ ssh     │ -p calico-023791 sudo systemctl cat cri-docker --no-pager                                                                                                                │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ ssh     │ -p calico-023791 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │                     │
	│ ssh     │ -p calico-023791 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ ssh     │ -p calico-023791 sudo cri-dockerd --version                                                                                                                              │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ ssh     │ -p calico-023791 sudo systemctl status containerd --all --full --no-pager                                                                                                │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ ssh     │ -p calico-023791 sudo systemctl cat containerd --no-pager                                                                                                                │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ ssh     │ -p calico-023791 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ ssh     │ -p calico-023791 sudo cat /etc/containerd/config.toml                                                                                                                    │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ ssh     │ -p calico-023791 sudo containerd config dump                                                                                                                             │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ ssh     │ -p calico-023791 sudo systemctl status crio --all --full --no-pager                                                                                                      │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │                     │
	│ ssh     │ -p calico-023791 sudo systemctl cat crio --no-pager                                                                                                                      │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ ssh     │ -p calico-023791 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ ssh     │ -p calico-023791 sudo crio config                                                                                                                                        │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ delete  │ -p calico-023791                                                                                                                                                         │ calico-023791         │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:23 UTC │
	│ start   │ -p custom-flannel-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd │ custom-flannel-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:23 UTC │ 13 Dec 25 16:24 UTC │
	│ ssh     │ -p custom-flannel-023791 pgrep -a kubelet                                                                                                                                │ custom-flannel-023791 │ jenkins │ v1.37.0 │ 13 Dec 25 16:24 UTC │ 13 Dec 25 16:24 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 16:23:10
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 16:23:10.031058 1582456 out.go:360] Setting OutFile to fd 1 ...
	I1213 16:23:10.031298 1582456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:23:10.031363 1582456 out.go:374] Setting ErrFile to fd 2...
	I1213 16:23:10.031385 1582456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 16:23:10.031809 1582456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 16:23:10.032428 1582456 out.go:368] Setting JSON to false
	I1213 16:23:10.033852 1582456 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":29139,"bootTime":1765613851,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 16:23:10.033953 1582456 start.go:143] virtualization:  
	I1213 16:23:10.037882 1582456 out.go:179] * [custom-flannel-023791] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 16:23:10.040850 1582456 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 16:23:10.040958 1582456 notify.go:221] Checking for updates...
	I1213 16:23:10.047384 1582456 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 16:23:10.050336 1582456 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:23:10.053427 1582456 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 16:23:10.056402 1582456 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 16:23:10.059435 1582456 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 16:23:10.063045 1582456 config.go:182] Loaded profile config "no-preload-439544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 16:23:10.063198 1582456 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 16:23:10.108784 1582456 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 16:23:10.108988 1582456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:23:10.192324 1582456 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:23:10.180197089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:23:10.192437 1582456 docker.go:319] overlay module found
	I1213 16:23:10.195730 1582456 out.go:179] * Using the docker driver based on user configuration
	I1213 16:23:10.198543 1582456 start.go:309] selected driver: docker
	I1213 16:23:10.198570 1582456 start.go:927] validating driver "docker" against <nil>
	I1213 16:23:10.198584 1582456 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 16:23:10.199507 1582456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 16:23:10.254769 1582456 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 16:23:10.245154743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 16:23:10.254929 1582456 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 16:23:10.255186 1582456 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 16:23:10.258115 1582456 out.go:179] * Using Docker driver with root privileges
	I1213 16:23:10.261004 1582456 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1213 16:23:10.261055 1582456 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1213 16:23:10.261136 1582456 start.go:353] cluster config:
	{Name:custom-flannel-023791 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-023791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Sock
etVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:23:10.264303 1582456 out.go:179] * Starting "custom-flannel-023791" primary control-plane node in "custom-flannel-023791" cluster
	I1213 16:23:10.267147 1582456 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 16:23:10.270040 1582456 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 16:23:10.272823 1582456 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 16:23:10.272869 1582456 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1213 16:23:10.272882 1582456 cache.go:65] Caching tarball of preloaded images
	I1213 16:23:10.272903 1582456 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 16:23:10.272969 1582456 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 16:23:10.272979 1582456 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1213 16:23:10.273091 1582456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/config.json ...
	I1213 16:23:10.273109 1582456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/config.json: {Name:mk427bc6d907a3caadd6437db28a50427bb1aab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:23:10.292305 1582456 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 16:23:10.292334 1582456 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 16:23:10.292354 1582456 cache.go:243] Successfully downloaded all kic artifacts
	I1213 16:23:10.292386 1582456 start.go:360] acquireMachinesLock for custom-flannel-023791: {Name:mk813907367acfda0dd91e8e97f934731c0f6691 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 16:23:10.292491 1582456 start.go:364] duration metric: took 84.716µs to acquireMachinesLock for "custom-flannel-023791"
	I1213 16:23:10.292521 1582456 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-023791 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-023791 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:23:10.292601 1582456 start.go:125] createHost starting for "" (driver="docker")
	I1213 16:23:10.297689 1582456 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 16:23:10.297918 1582456 start.go:159] libmachine.API.Create for "custom-flannel-023791" (driver="docker")
	I1213 16:23:10.297962 1582456 client.go:173] LocalClient.Create starting
	I1213 16:23:10.298029 1582456 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem
	I1213 16:23:10.298074 1582456 main.go:143] libmachine: Decoding PEM data...
	I1213 16:23:10.298094 1582456 main.go:143] libmachine: Parsing certificate...
	I1213 16:23:10.298154 1582456 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem
	I1213 16:23:10.298180 1582456 main.go:143] libmachine: Decoding PEM data...
	I1213 16:23:10.298192 1582456 main.go:143] libmachine: Parsing certificate...
	I1213 16:23:10.298566 1582456 cli_runner.go:164] Run: docker network inspect custom-flannel-023791 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 16:23:10.315481 1582456 cli_runner.go:211] docker network inspect custom-flannel-023791 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 16:23:10.315565 1582456 network_create.go:284] running [docker network inspect custom-flannel-023791] to gather additional debugging logs...
	I1213 16:23:10.315586 1582456 cli_runner.go:164] Run: docker network inspect custom-flannel-023791
	W1213 16:23:10.330571 1582456 cli_runner.go:211] docker network inspect custom-flannel-023791 returned with exit code 1
	I1213 16:23:10.330605 1582456 network_create.go:287] error running [docker network inspect custom-flannel-023791]: docker network inspect custom-flannel-023791: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-023791 not found
	I1213 16:23:10.330620 1582456 network_create.go:289] output of [docker network inspect custom-flannel-023791]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-023791 not found
	
	** /stderr **
	I1213 16:23:10.330741 1582456 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:23:10.347726 1582456 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2ccf9a9eb2c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:9e:64:ed:f4:f2} reservation:<nil>}
	I1213 16:23:10.348012 1582456 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2add06a95dc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:32:3f:19:f0:2f:b1} reservation:<nil>}
	I1213 16:23:10.348263 1582456 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8517ffe6861d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:d6:d4:51:8b:3d} reservation:<nil>}
	I1213 16:23:10.348678 1582456 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f0430}
	I1213 16:23:10.348701 1582456 network_create.go:124] attempt to create docker network custom-flannel-023791 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 16:23:10.348770 1582456 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-023791 custom-flannel-023791
	I1213 16:23:10.407992 1582456 network_create.go:108] docker network custom-flannel-023791 192.168.76.0/24 created
	I1213 16:23:10.408029 1582456 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-023791" container
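
Note: the subnet selection and network creation recorded above can be reproduced with the docker CLI alone. The Go sketch below is an editorial illustration driven through os/exec, not minikube's network_create implementation; the subnet, gateway, MTU, labels and profile name are copied from the log and assumed to still be free on the host.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Create a labelled bridge network with a fixed subnet/gateway/MTU,
	// mirroring the `docker network create` invocation recorded above.
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.76.0/24",
		"--gateway=192.168.76.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=custom-flannel-023791",
		"custom-flannel-023791",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("network create failed: %v\n%s", err, out)
		return
	}
	// The gateway takes .1, so the first node container gets the next address, .2,
	// which is the "calculated static IP" the log reports.
	fmt.Printf("created: %s", out)
}
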
	I1213 16:23:10.408123 1582456 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 16:23:10.424861 1582456 cli_runner.go:164] Run: docker volume create custom-flannel-023791 --label name.minikube.sigs.k8s.io=custom-flannel-023791 --label created_by.minikube.sigs.k8s.io=true
	I1213 16:23:10.442403 1582456 oci.go:103] Successfully created a docker volume custom-flannel-023791
	I1213 16:23:10.442494 1582456 cli_runner.go:164] Run: docker run --rm --name custom-flannel-023791-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-023791 --entrypoint /usr/bin/test -v custom-flannel-023791:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 16:23:10.968965 1582456 oci.go:107] Successfully prepared a docker volume custom-flannel-023791
	I1213 16:23:10.969036 1582456 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 16:23:10.969056 1582456 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 16:23:10.969142 1582456 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v custom-flannel-023791:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 16:23:15.608697 1582456 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v custom-flannel-023791:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.639499884s)
	I1213 16:23:15.608736 1582456 kic.go:203] duration metric: took 4.639676643s to extract preloaded images to volume ...
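
Note: the preload step above unpacks the cached image tarball into the profile's docker volume by running tar inside a throwaway kicbase container. A minimal Go sketch of that pattern follows; the tarball path, volume name and image tag are taken from the log and are placeholders on any other machine. It assumes the docker CLI is on PATH and that the image ships an lz4-capable /usr/bin/tar.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload bind-mounts the preloaded-images tarball into a throwaway
// container and untars it into the named volume, mirroring the command above.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Example values taken from the log; substitute your own paths.
	if err := extractPreload(
		"/path/to/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4",
		"custom-flannel-023791",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083",
	); err != nil {
		fmt.Println(err)
	}
}
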
	W1213 16:23:15.608890 1582456 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 16:23:15.608995 1582456 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 16:23:15.664545 1582456 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-023791 --name custom-flannel-023791 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-023791 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-023791 --network custom-flannel-023791 --ip 192.168.76.2 --volume custom-flannel-023791:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
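
Note: the long `docker run` above is what turns the kicbase image into a node. The sketch below is deliberately abbreviated, keeping only the flags that give the container node-like behaviour (privileged mode, tmpfs /tmp and /run, the profile volume at /var, a static IP on the profile network, published SSH and API-server ports); the complete flag set is the log line above, and the image tag is copied from it.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Abbreviated node-container launch; see the full command in the log.
	args := []string{
		"run", "-d", "-t", "--privileged",
		"--security-opt", "seccomp=unconfined",
		"--security-opt", "apparmor=unconfined",
		"--tmpfs", "/tmp", "--tmpfs", "/run",
		"-v", "/lib/modules:/lib/modules:ro",
		"--hostname", "custom-flannel-023791",
		"--name", "custom-flannel-023791",
		"--network", "custom-flannel-023791",
		"--ip", "192.168.76.2",
		"--volume", "custom-flannel-023791:/var",
		"--memory=3072mb", "--cpus=2",
		"--publish=127.0.0.1::8443",
		"--publish=127.0.0.1::22",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("docker run failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("container id: %s", out)
}
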
	I1213 16:23:15.967081 1582456 cli_runner.go:164] Run: docker container inspect custom-flannel-023791 --format={{.State.Running}}
	I1213 16:23:15.987531 1582456 cli_runner.go:164] Run: docker container inspect custom-flannel-023791 --format={{.State.Status}}
	I1213 16:23:16.013153 1582456 cli_runner.go:164] Run: docker exec custom-flannel-023791 stat /var/lib/dpkg/alternatives/iptables
	I1213 16:23:16.076067 1582456 oci.go:144] the created container "custom-flannel-023791" has a running status.
	I1213 16:23:16.076098 1582456 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/custom-flannel-023791/id_rsa...
	I1213 16:23:16.651417 1582456 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/custom-flannel-023791/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 16:23:16.671156 1582456 cli_runner.go:164] Run: docker container inspect custom-flannel-023791 --format={{.State.Status}}
	I1213 16:23:16.688192 1582456 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 16:23:16.688214 1582456 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-023791 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 16:23:16.727007 1582456 cli_runner.go:164] Run: docker container inspect custom-flannel-023791 --format={{.State.Status}}
	I1213 16:23:16.746614 1582456 machine.go:94] provisionDockerMachine start ...
	I1213 16:23:16.746709 1582456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-023791
	I1213 16:23:16.763191 1582456 main.go:143] libmachine: Using SSH client type: native
	I1213 16:23:16.763628 1582456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34253 <nil> <nil>}
	I1213 16:23:16.763650 1582456 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 16:23:16.764317 1582456 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 16:23:19.919030 1582456 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-023791
	
	I1213 16:23:19.919052 1582456 ubuntu.go:182] provisioning hostname "custom-flannel-023791"
	I1213 16:23:19.919130 1582456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-023791
	I1213 16:23:19.943064 1582456 main.go:143] libmachine: Using SSH client type: native
	I1213 16:23:19.943409 1582456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34253 <nil> <nil>}
	I1213 16:23:19.943428 1582456 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-023791 && echo "custom-flannel-023791" | sudo tee /etc/hostname
	I1213 16:23:20.114530 1582456 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-023791
	
	I1213 16:23:20.114665 1582456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-023791
	I1213 16:23:20.132578 1582456 main.go:143] libmachine: Using SSH client type: native
	I1213 16:23:20.132906 1582456 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34253 <nil> <nil>}
	I1213 16:23:20.132928 1582456 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-023791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-023791/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-023791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 16:23:20.283613 1582456 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 16:23:20.283639 1582456 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
	I1213 16:23:20.283658 1582456 ubuntu.go:190] setting up certificates
	I1213 16:23:20.283693 1582456 provision.go:84] configureAuth start
	I1213 16:23:20.283769 1582456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-023791
	I1213 16:23:20.300781 1582456 provision.go:143] copyHostCerts
	I1213 16:23:20.300858 1582456 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
	I1213 16:23:20.300873 1582456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
	I1213 16:23:20.300954 1582456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
	I1213 16:23:20.301068 1582456 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
	I1213 16:23:20.301080 1582456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
	I1213 16:23:20.301109 1582456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
	I1213 16:23:20.301172 1582456 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
	I1213 16:23:20.301180 1582456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
	I1213 16:23:20.301204 1582456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
	I1213 16:23:20.301266 1582456 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-023791 san=[127.0.0.1 192.168.76.2 custom-flannel-023791 localhost minikube]
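
Note: the server certificate generated above is signed by the profile CA and carries the SAN list printed in the log (loopback, the node IP, the profile name, localhost, minikube). The Go sketch below only illustrates the shape of that SAN list with a self-signed certificate; it is not minikube's provision code, and the organization, lifetime and addresses are copied from the log for illustration.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key plus a serving certificate whose SANs match the log's san=[...] list.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-023791"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"custom-flannel-023791", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	// Self-signed here for brevity; minikube signs with the ca.pem/ca-key.pem pair above.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
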
	I1213 16:23:20.530303 1582456 provision.go:177] copyRemoteCerts
	I1213 16:23:20.530372 1582456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 16:23:20.530417 1582456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-023791
	I1213 16:23:20.548022 1582456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/custom-flannel-023791/id_rsa Username:docker}
	I1213 16:23:20.655410 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 16:23:20.675636 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1213 16:23:20.693949 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 16:23:20.712168 1582456 provision.go:87] duration metric: took 428.442536ms to configureAuth
	I1213 16:23:20.712197 1582456 ubuntu.go:206] setting minikube options for container-runtime
	I1213 16:23:20.712395 1582456 config.go:182] Loaded profile config "custom-flannel-023791": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 16:23:20.712408 1582456 machine.go:97] duration metric: took 3.965771509s to provisionDockerMachine
	I1213 16:23:20.712414 1582456 client.go:176] duration metric: took 10.4144423s to LocalClient.Create
	I1213 16:23:20.712436 1582456 start.go:167] duration metric: took 10.414518794s to libmachine.API.Create "custom-flannel-023791"
	I1213 16:23:20.712443 1582456 start.go:293] postStartSetup for "custom-flannel-023791" (driver="docker")
	I1213 16:23:20.712457 1582456 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 16:23:20.712515 1582456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 16:23:20.712559 1582456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-023791
	I1213 16:23:20.730683 1582456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/custom-flannel-023791/id_rsa Username:docker}
	I1213 16:23:20.835542 1582456 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 16:23:20.838907 1582456 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 16:23:20.838937 1582456 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 16:23:20.838949 1582456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
	I1213 16:23:20.839003 1582456 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
	I1213 16:23:20.839082 1582456 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
	I1213 16:23:20.839194 1582456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 16:23:20.846705 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:23:20.864498 1582456 start.go:296] duration metric: took 152.035495ms for postStartSetup
	I1213 16:23:20.864893 1582456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-023791
	I1213 16:23:20.881316 1582456 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/config.json ...
	I1213 16:23:20.881605 1582456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 16:23:20.881665 1582456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-023791
	I1213 16:23:20.898495 1582456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/custom-flannel-023791/id_rsa Username:docker}
	I1213 16:23:21.001085 1582456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 16:23:21.006720 1582456 start.go:128] duration metric: took 10.714102024s to createHost
	I1213 16:23:21.006756 1582456 start.go:83] releasing machines lock for "custom-flannel-023791", held for 10.714251896s
	I1213 16:23:21.006851 1582456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-023791
	I1213 16:23:21.024599 1582456 ssh_runner.go:195] Run: cat /version.json
	I1213 16:23:21.024656 1582456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-023791
	I1213 16:23:21.024666 1582456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 16:23:21.024722 1582456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-023791
	I1213 16:23:21.048953 1582456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/custom-flannel-023791/id_rsa Username:docker}
	I1213 16:23:21.050012 1582456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/custom-flannel-023791/id_rsa Username:docker}
	I1213 16:23:21.150954 1582456 ssh_runner.go:195] Run: systemctl --version
	I1213 16:23:21.260799 1582456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 16:23:21.265158 1582456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 16:23:21.265228 1582456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 16:23:21.294091 1582456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 16:23:21.294117 1582456 start.go:496] detecting cgroup driver to use...
	I1213 16:23:21.294149 1582456 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 16:23:21.294197 1582456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 16:23:21.308441 1582456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 16:23:21.321689 1582456 docker.go:218] disabling cri-docker service (if available) ...
	I1213 16:23:21.321773 1582456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 16:23:21.338431 1582456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 16:23:21.358928 1582456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 16:23:21.484481 1582456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 16:23:21.612056 1582456 docker.go:234] disabling docker service ...
	I1213 16:23:21.612150 1582456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 16:23:21.632693 1582456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 16:23:21.645832 1582456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 16:23:21.772712 1582456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 16:23:21.890148 1582456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 16:23:21.904271 1582456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 16:23:21.917978 1582456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 16:23:21.927570 1582456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 16:23:21.937391 1582456 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 16:23:21.937515 1582456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 16:23:21.947244 1582456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:23:21.956311 1582456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 16:23:21.967441 1582456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 16:23:21.976849 1582456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 16:23:21.985136 1582456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 16:23:21.994377 1582456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 16:23:22.010255 1582456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 16:23:22.021122 1582456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 16:23:22.030267 1582456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 16:23:22.038295 1582456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:23:22.182397 1582456 ssh_runner.go:195] Run: sudo systemctl restart containerd
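
Note: the sed commands above rewrite /etc/containerd/config.toml so containerd uses the "cgroupfs" driver detected on the host (SystemdCgroup = false), then reload systemd and restart containerd. A minimal Go sketch of just the cgroup-driver rewrite follows; the config path and file mode are assumptions, and it must run as root followed by a containerd restart to take effect.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Flip any existing SystemdCgroup setting to false, preserving indentation,
	// which is what the sed expression in the log does.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, updated, 0o644); err != nil {
		fmt.Println("write:", err)
		return
	}
	fmt.Println("updated; restart containerd for the change to take effect")
}
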
	I1213 16:23:22.335481 1582456 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 16:23:22.335591 1582456 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 16:23:22.339395 1582456 start.go:564] Will wait 60s for crictl version
	I1213 16:23:22.339474 1582456 ssh_runner.go:195] Run: which crictl
	I1213 16:23:22.342818 1582456 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 16:23:22.370411 1582456 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 16:23:22.370492 1582456 ssh_runner.go:195] Run: containerd --version
	I1213 16:23:22.394180 1582456 ssh_runner.go:195] Run: containerd --version
	I1213 16:23:22.421834 1582456 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1213 16:23:22.424867 1582456 cli_runner.go:164] Run: docker network inspect custom-flannel-023791 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 16:23:22.441541 1582456 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 16:23:22.445449 1582456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 16:23:22.457685 1582456 kubeadm.go:884] updating cluster {Name:custom-flannel-023791 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-023791 Namespace:default APIServerHAVIP: APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 16:23:22.457837 1582456 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 16:23:22.457925 1582456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:23:22.489068 1582456 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:23:22.489097 1582456 containerd.go:534] Images already preloaded, skipping extraction
	I1213 16:23:22.489170 1582456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 16:23:22.517492 1582456 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 16:23:22.517514 1582456 cache_images.go:86] Images are preloaded, skipping loading
	I1213 16:23:22.517522 1582456 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1213 16:23:22.517633 1582456 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-023791 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-023791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1213 16:23:22.517703 1582456 ssh_runner.go:195] Run: sudo crictl info
	I1213 16:23:22.546524 1582456 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1213 16:23:22.546565 1582456 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 16:23:22.546587 1582456 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-023791 NodeName:custom-flannel-023791 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 16:23:22.546705 1582456 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "custom-flannel-023791"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 16:23:22.546772 1582456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 16:23:22.559535 1582456 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 16:23:22.559652 1582456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 16:23:22.567366 1582456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1213 16:23:22.579927 1582456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 16:23:22.592898 1582456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I1213 16:23:22.605773 1582456 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 16:23:22.609376 1582456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
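
Note: this is the second of two idempotent /etc/hosts updates in this run (host.minikube.internal pointing at the network gateway earlier, control-plane.minikube.internal pointing at the node IP here). The Go sketch below reproduces the same drop-then-append pattern as the bash one-liner; it is an editorial illustration that rewrites /etc/hosts in place and therefore needs root.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line already ending in "\t<name>" and appends
// "ip\tname", matching the grep -v / echo pipeline recorded in the log.
func ensureHostsEntry(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// The two entries this run adds, with the addresses from the log.
	for name, ip := range map[string]string{
		"host.minikube.internal":          "192.168.76.1",
		"control-plane.minikube.internal": "192.168.76.2",
	} {
		if err := ensureHostsEntry(ip, name); err != nil {
			fmt.Println(err)
		}
	}
}
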
	I1213 16:23:22.625498 1582456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:23:22.743103 1582456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:23:22.758567 1582456 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791 for IP: 192.168.76.2
	I1213 16:23:22.758631 1582456 certs.go:195] generating shared ca certs ...
	I1213 16:23:22.758663 1582456 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:23:22.758833 1582456 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
	I1213 16:23:22.758920 1582456 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
	I1213 16:23:22.758956 1582456 certs.go:257] generating profile certs ...
	I1213 16:23:22.759034 1582456 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/client.key
	I1213 16:23:22.759071 1582456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/client.crt with IP's: []
	I1213 16:23:23.446255 1582456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/client.crt ...
	I1213 16:23:23.446290 1582456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/client.crt: {Name:mk9663f9c4763d1104a815f2a77c8999229a5208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:23:23.446493 1582456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/client.key ...
	I1213 16:23:23.446510 1582456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/client.key: {Name:mkedada5058c111c1e0806a899d928db9d99c1e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:23:23.446604 1582456 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/apiserver.key.7ae686be
	I1213 16:23:23.446622 1582456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/apiserver.crt.7ae686be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 16:23:23.639720 1582456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/apiserver.crt.7ae686be ...
	I1213 16:23:23.639753 1582456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/apiserver.crt.7ae686be: {Name:mkef1b54b2f33fb6f9c94610b8c14e94f109f637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:23:23.639958 1582456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/apiserver.key.7ae686be ...
	I1213 16:23:23.639976 1582456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/apiserver.key.7ae686be: {Name:mkf5a20701053b7b22d44b9c4f633fbd754fd8ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:23:23.640072 1582456 certs.go:382] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/apiserver.crt.7ae686be -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/apiserver.crt
	I1213 16:23:23.640156 1582456 certs.go:386] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/apiserver.key.7ae686be -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/apiserver.key
	I1213 16:23:23.640222 1582456 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/proxy-client.key
	I1213 16:23:23.640240 1582456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/proxy-client.crt with IP's: []
	I1213 16:23:23.999534 1582456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/proxy-client.crt ...
	I1213 16:23:23.999567 1582456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/proxy-client.crt: {Name:mkbf159f78f97453701e160c0bc83de371c4c73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:23:23.999778 1582456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/proxy-client.key ...
	I1213 16:23:23.999804 1582456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/proxy-client.key: {Name:mkb358cc77ed56fae7d79a537d963c07de68fa78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:23:23.999990 1582456 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
	W1213 16:23:24.000037 1582456 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
	I1213 16:23:24.000051 1582456 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 16:23:24.000080 1582456 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
	I1213 16:23:24.000110 1582456 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
	I1213 16:23:24.000135 1582456 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
	I1213 16:23:24.000185 1582456 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
	I1213 16:23:24.000739 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 16:23:24.025823 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 16:23:24.045180 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 16:23:24.064545 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 16:23:24.083779 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 16:23:24.102406 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 16:23:24.120548 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 16:23:24.138008 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/custom-flannel-023791/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 16:23:24.155691 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
	I1213 16:23:24.173904 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 16:23:24.191720 1582456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
	I1213 16:23:24.209607 1582456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 16:23:24.223258 1582456 ssh_runner.go:195] Run: openssl version
	I1213 16:23:24.232610 1582456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
	I1213 16:23:24.240510 1582456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
	I1213 16:23:24.249065 1582456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
	I1213 16:23:24.253653 1582456 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
	I1213 16:23:24.253720 1582456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
	I1213 16:23:24.296611 1582456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 16:23:24.303946 1582456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12529342.pem /etc/ssl/certs/3ec20f2e.0
	I1213 16:23:24.311411 1582456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:23:24.319063 1582456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 16:23:24.326816 1582456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:23:24.330664 1582456 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:23:24.330729 1582456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 16:23:24.408749 1582456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 16:23:24.416691 1582456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 16:23:24.424660 1582456 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
	I1213 16:23:24.432539 1582456 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
	I1213 16:23:24.440271 1582456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
	I1213 16:23:24.444135 1582456 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
	I1213 16:23:24.444198 1582456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
	I1213 16:23:24.485419 1582456 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 16:23:24.493212 1582456 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1252934.pem /etc/ssl/certs/51391683.0
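
Note: the repeated openssl/ln steps above install each PEM into the system trust store the classic OpenSSL way: compute the subject hash with `openssl x509 -hash -noout -in <cert>` and symlink the certificate as /etc/ssl/certs/<hash>.0. A minimal Go sketch of that pattern follows; it assumes openssl on PATH and root privileges, and the example path is one of the certificates from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert adds an OpenSSL subject-hash symlink (<hash>.0) in /etc/ssl/certs
// pointing at the given certificate, mirroring the log's `ln -fs` calls.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
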
	I1213 16:23:24.500647 1582456 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 16:23:24.504074 1582456 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 16:23:24.504126 1582456 kubeadm.go:401] StartCluster: {Name:custom-flannel-023791 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-023791 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Dis
ableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 16:23:24.504205 1582456 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 16:23:24.504268 1582456 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 16:23:24.530539 1582456 cri.go:89] found id: ""
	I1213 16:23:24.530607 1582456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 16:23:24.538409 1582456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 16:23:24.546920 1582456 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 16:23:24.546996 1582456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 16:23:24.554755 1582456 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 16:23:24.554782 1582456 kubeadm.go:158] found existing configuration files:
	
	I1213 16:23:24.554835 1582456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 16:23:24.562830 1582456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 16:23:24.562915 1582456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 16:23:24.570385 1582456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 16:23:24.577905 1582456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 16:23:24.577971 1582456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 16:23:24.585177 1582456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 16:23:24.592724 1582456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 16:23:24.592797 1582456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 16:23:24.600138 1582456 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 16:23:24.607797 1582456 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 16:23:24.607894 1582456 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 16:23:24.615483 1582456 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 16:23:24.676055 1582456 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 16:23:24.676292 1582456 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 16:23:24.752396 1582456 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 16:23:42.437541 1582456 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 16:23:42.437599 1582456 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 16:23:42.437697 1582456 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 16:23:42.437753 1582456 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 16:23:42.437787 1582456 kubeadm.go:319] OS: Linux
	I1213 16:23:42.437832 1582456 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 16:23:42.437880 1582456 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 16:23:42.437927 1582456 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 16:23:42.437975 1582456 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 16:23:42.438023 1582456 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 16:23:42.438071 1582456 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 16:23:42.438116 1582456 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 16:23:42.438164 1582456 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 16:23:42.438210 1582456 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 16:23:42.438282 1582456 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 16:23:42.438381 1582456 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 16:23:42.438477 1582456 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 16:23:42.438543 1582456 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 16:23:42.441767 1582456 out.go:252]   - Generating certificates and keys ...
	I1213 16:23:42.441889 1582456 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 16:23:42.441987 1582456 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 16:23:42.442066 1582456 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 16:23:42.442124 1582456 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 16:23:42.442185 1582456 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 16:23:42.442235 1582456 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 16:23:42.442297 1582456 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 16:23:42.442423 1582456 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-023791 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:23:42.442482 1582456 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 16:23:42.442604 1582456 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-023791 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 16:23:42.442669 1582456 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 16:23:42.442732 1582456 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 16:23:42.442775 1582456 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 16:23:42.442831 1582456 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 16:23:42.442881 1582456 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 16:23:42.442938 1582456 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 16:23:42.442993 1582456 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 16:23:42.443057 1582456 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 16:23:42.443111 1582456 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 16:23:42.443193 1582456 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 16:23:42.443266 1582456 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 16:23:42.446416 1582456 out.go:252]   - Booting up control plane ...
	I1213 16:23:42.446542 1582456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 16:23:42.446625 1582456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 16:23:42.446701 1582456 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 16:23:42.446825 1582456 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 16:23:42.446981 1582456 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 16:23:42.447103 1582456 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 16:23:42.447189 1582456 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 16:23:42.447229 1582456 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 16:23:42.447394 1582456 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 16:23:42.447500 1582456 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 16:23:42.447557 1582456 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.00162201s
	I1213 16:23:42.447649 1582456 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 16:23:42.447729 1582456 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1213 16:23:42.447817 1582456 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 16:23:42.447896 1582456 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 16:23:42.447970 1582456 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.927963361s
	I1213 16:23:42.448037 1582456 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.057262898s
	I1213 16:23:42.448102 1582456 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.00153351s
	I1213 16:23:42.448207 1582456 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 16:23:42.448332 1582456 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 16:23:42.448402 1582456 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 16:23:42.448590 1582456 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-023791 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 16:23:42.448645 1582456 kubeadm.go:319] [bootstrap-token] Using token: tauua8.mto0imkdlzq3ee4p
	I1213 16:23:42.451769 1582456 out.go:252]   - Configuring RBAC rules ...
	I1213 16:23:42.451899 1582456 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 16:23:42.451988 1582456 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 16:23:42.452139 1582456 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 16:23:42.452276 1582456 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 16:23:42.452413 1582456 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 16:23:42.452506 1582456 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 16:23:42.452629 1582456 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 16:23:42.452675 1582456 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 16:23:42.452725 1582456 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 16:23:42.452728 1582456 kubeadm.go:319] 
	I1213 16:23:42.452793 1582456 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 16:23:42.452796 1582456 kubeadm.go:319] 
	I1213 16:23:42.452879 1582456 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 16:23:42.452883 1582456 kubeadm.go:319] 
	I1213 16:23:42.452910 1582456 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 16:23:42.452974 1582456 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 16:23:42.453028 1582456 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 16:23:42.453031 1582456 kubeadm.go:319] 
	I1213 16:23:42.453089 1582456 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 16:23:42.453093 1582456 kubeadm.go:319] 
	I1213 16:23:42.453144 1582456 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 16:23:42.453148 1582456 kubeadm.go:319] 
	I1213 16:23:42.453204 1582456 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 16:23:42.453285 1582456 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 16:23:42.453368 1582456 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 16:23:42.453372 1582456 kubeadm.go:319] 
	I1213 16:23:42.453465 1582456 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 16:23:42.453547 1582456 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 16:23:42.453550 1582456 kubeadm.go:319] 
	I1213 16:23:42.453640 1582456 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tauua8.mto0imkdlzq3ee4p \
	I1213 16:23:42.453751 1582456 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:590ff7d5a34ba2f13bc4446ba280674514ec0440f2cd73335e75879dbf7fc61d \
	I1213 16:23:42.453773 1582456 kubeadm.go:319] 	--control-plane 
	I1213 16:23:42.453785 1582456 kubeadm.go:319] 
	I1213 16:23:42.453877 1582456 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 16:23:42.453881 1582456 kubeadm.go:319] 
	I1213 16:23:42.453969 1582456 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tauua8.mto0imkdlzq3ee4p \
	I1213 16:23:42.454094 1582456 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:590ff7d5a34ba2f13bc4446ba280674514ec0440f2cd73335e75879dbf7fc61d 
	I1213 16:23:42.454103 1582456 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1213 16:23:42.457295 1582456 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1213 16:23:42.460217 1582456 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 16:23:42.460296 1582456 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1213 16:23:42.464137 1582456 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1213 16:23:42.464169 1582456 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1213 16:23:42.483946 1582456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 16:23:42.998073 1582456 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 16:23:42.998211 1582456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:23:42.998283 1582456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-023791 minikube.k8s.io/updated_at=2025_12_13T16_23_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=custom-flannel-023791 minikube.k8s.io/primary=true
	I1213 16:23:43.205973 1582456 ops.go:34] apiserver oom_adj: -16
	I1213 16:23:43.206075 1582456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:23:43.706942 1582456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:23:44.206776 1582456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:23:44.706496 1582456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:23:45.206966 1582456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:23:45.706655 1582456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:23:46.206854 1582456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 16:23:46.304980 1582456 kubeadm.go:1114] duration metric: took 3.306823465s to wait for elevateKubeSystemPrivileges
	I1213 16:23:46.305018 1582456 kubeadm.go:403] duration metric: took 21.800896746s to StartCluster
	I1213 16:23:46.305045 1582456 settings.go:142] acquiring lock: {Name:mk3e38eb9635dd950ddf8081cc0598a8c10049c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:23:46.305128 1582456 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 16:23:46.306077 1582456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/kubeconfig: {Name:mkda740efb91ce95f4f7a29196e84ffffb0974e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 16:23:46.306304 1582456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 16:23:46.306313 1582456 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 16:23:46.306562 1582456 config.go:182] Loaded profile config "custom-flannel-023791": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 16:23:46.306603 1582456 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 16:23:46.306664 1582456 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-023791"
	I1213 16:23:46.306678 1582456 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-023791"
	I1213 16:23:46.306698 1582456 host.go:66] Checking if "custom-flannel-023791" exists ...
	I1213 16:23:46.307167 1582456 cli_runner.go:164] Run: docker container inspect custom-flannel-023791 --format={{.State.Status}}
	I1213 16:23:46.307768 1582456 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-023791"
	I1213 16:23:46.307787 1582456 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-023791"
	I1213 16:23:46.308059 1582456 cli_runner.go:164] Run: docker container inspect custom-flannel-023791 --format={{.State.Status}}
	I1213 16:23:46.312047 1582456 out.go:179] * Verifying Kubernetes components...
	I1213 16:23:46.314860 1582456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 16:23:46.355658 1582456 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-023791"
	I1213 16:23:46.355696 1582456 host.go:66] Checking if "custom-flannel-023791" exists ...
	I1213 16:23:46.356636 1582456 cli_runner.go:164] Run: docker container inspect custom-flannel-023791 --format={{.State.Status}}
	I1213 16:23:46.367423 1582456 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 16:23:46.370349 1582456 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:23:46.370371 1582456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 16:23:46.370438 1582456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-023791
	I1213 16:23:46.401113 1582456 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 16:23:46.401140 1582456 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 16:23:46.401209 1582456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-023791
	I1213 16:23:46.420975 1582456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/custom-flannel-023791/id_rsa Username:docker}
	I1213 16:23:46.440936 1582456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/custom-flannel-023791/id_rsa Username:docker}
	I1213 16:23:46.521685 1582456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 16:23:46.574889 1582456 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 16:23:46.710110 1582456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 16:23:46.804736 1582456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 16:23:47.190499 1582456 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1213 16:23:47.193330 1582456 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-023791" to be "Ready" ...
	I1213 16:23:47.695549 1582456 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-023791" context rescaled to 1 replicas
	I1213 16:23:47.739085 1582456 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1213 16:23:47.742970 1582456 addons.go:530] duration metric: took 1.436358105s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1213 16:23:49.196050 1582456 node_ready.go:57] node "custom-flannel-023791" has "Ready":"False" status (will retry)
	I1213 16:23:50.699085 1582456 node_ready.go:49] node "custom-flannel-023791" is "Ready"
	I1213 16:23:50.699119 1582456 node_ready.go:38] duration metric: took 3.50575904s for node "custom-flannel-023791" to be "Ready" ...
	I1213 16:23:50.699134 1582456 api_server.go:52] waiting for apiserver process to appear ...
	I1213 16:23:50.699239 1582456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 16:23:50.729427 1582456 api_server.go:72] duration metric: took 4.423083251s to wait for apiserver process to appear ...
	I1213 16:23:50.729451 1582456 api_server.go:88] waiting for apiserver healthz status ...
	I1213 16:23:50.729472 1582456 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 16:23:50.738266 1582456 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 16:23:50.743078 1582456 api_server.go:141] control plane version: v1.34.2
	I1213 16:23:50.743109 1582456 api_server.go:131] duration metric: took 13.651014ms to wait for apiserver health ...
	I1213 16:23:50.743171 1582456 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 16:23:50.753452 1582456 system_pods.go:59] 7 kube-system pods found
	I1213 16:23:50.753484 1582456 system_pods.go:61] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending
	I1213 16:23:50.753495 1582456 system_pods.go:61] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 16:23:50.753525 1582456 system_pods.go:61] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:23:50.753538 1582456 system_pods.go:61] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:23:50.753543 1582456 system_pods.go:61] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:23:50.753547 1582456 system_pods.go:61] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:23:50.753553 1582456 system_pods.go:61] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 16:23:50.753570 1582456 system_pods.go:74] duration metric: took 10.387711ms to wait for pod list to return data ...
	I1213 16:23:50.753578 1582456 default_sa.go:34] waiting for default service account to be created ...
	I1213 16:23:50.758321 1582456 default_sa.go:45] found service account: "default"
	I1213 16:23:50.758350 1582456 default_sa.go:55] duration metric: took 4.762633ms for default service account to be created ...
	I1213 16:23:50.758361 1582456 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 16:23:50.764854 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:23:50.764892 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending
	I1213 16:23:50.764901 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 16:23:50.764929 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:23:50.764944 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:23:50.764949 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:23:50.764954 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:23:50.764968 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 16:23:50.765011 1582456 retry.go:31] will retry after 280.514502ms: missing components: kube-dns
	I1213 16:23:51.055649 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:23:51.055687 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:23:51.055726 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 16:23:51.055740 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:23:51.055748 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:23:51.055757 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:23:51.055761 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:23:51.055768 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 16:23:51.055799 1582456 retry.go:31] will retry after 330.333997ms: missing components: kube-dns
	I1213 16:23:51.391461 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:23:51.391546 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:23:51.391562 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 16:23:51.391569 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:23:51.391579 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:23:51.391583 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:23:51.391588 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:23:51.391606 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 16:23:51.391636 1582456 retry.go:31] will retry after 407.715006ms: missing components: kube-dns
	I1213 16:23:51.803437 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:23:51.803471 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:23:51.803480 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 16:23:51.803487 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:23:51.803492 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:23:51.803497 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:23:51.803501 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:23:51.803507 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 16:23:51.803521 1582456 retry.go:31] will retry after 578.884349ms: missing components: kube-dns
	I1213 16:23:52.386867 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:23:52.386899 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:23:52.386906 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running
	I1213 16:23:52.386913 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:23:52.386919 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:23:52.386923 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:23:52.386927 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:23:52.386931 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Running
	I1213 16:23:52.386946 1582456 retry.go:31] will retry after 606.617692ms: missing components: kube-dns
	I1213 16:23:52.996863 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:23:52.996900 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:23:52.996909 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running
	I1213 16:23:52.996915 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:23:52.996920 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:23:52.996924 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:23:52.996928 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:23:52.996932 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Running
	I1213 16:23:52.996947 1582456 retry.go:31] will retry after 611.266158ms: missing components: kube-dns
	I1213 16:23:53.612314 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:23:53.612348 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:23:53.612356 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running
	I1213 16:23:53.612363 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:23:53.612368 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:23:53.612372 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:23:53.612377 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:23:53.612392 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Running
	I1213 16:23:53.612407 1582456 retry.go:31] will retry after 867.71226ms: missing components: kube-dns
	I1213 16:23:54.483437 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:23:54.483470 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:23:54.483477 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running
	I1213 16:23:54.483483 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:23:54.483488 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:23:54.483492 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:23:54.483496 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:23:54.483500 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Running
	I1213 16:23:54.483514 1582456 retry.go:31] will retry after 1.19596534s: missing components: kube-dns
	I1213 16:23:55.683169 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:23:55.683205 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:23:55.683213 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running
	I1213 16:23:55.683219 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:23:55.683224 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:23:55.683268 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:23:55.683280 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:23:55.683284 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Running
	I1213 16:23:55.683298 1582456 retry.go:31] will retry after 1.338841711s: missing components: kube-dns
	I1213 16:23:57.026208 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:23:57.026247 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:23:57.026255 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running
	I1213 16:23:57.026262 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:23:57.026267 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:23:57.026272 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:23:57.026277 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:23:57.026281 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Running
	I1213 16:23:57.026296 1582456 retry.go:31] will retry after 1.79115326s: missing components: kube-dns
	I1213 16:23:58.821154 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:23:58.821205 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:23:58.821213 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running
	I1213 16:23:58.821220 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:23:58.821225 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:23:58.821229 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:23:58.821234 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:23:58.821238 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Running
	I1213 16:23:58.821254 1582456 retry.go:31] will retry after 2.564456262s: missing components: kube-dns
	I1213 16:24:01.390475 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:24:01.390513 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:24:01.390533 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running
	I1213 16:24:01.390540 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:24:01.390545 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:24:01.390549 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:24:01.390553 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:24:01.390563 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Running
	I1213 16:24:01.390577 1582456 retry.go:31] will retry after 2.748525782s: missing components: kube-dns
	I1213 16:24:04.143376 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:24:04.143412 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 16:24:04.143426 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running
	I1213 16:24:04.143434 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:24:04.143440 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:24:04.143444 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:24:04.143448 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:24:04.143456 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Running
	I1213 16:24:04.143472 1582456 retry.go:31] will retry after 4.036014667s: missing components: kube-dns
	I1213 16:24:08.183230 1582456 system_pods.go:86] 7 kube-system pods found
	I1213 16:24:08.183263 1582456 system_pods.go:89] "coredns-66bc5c9577-jhpvm" [a1f0b508-95ed-4e0a-8ee6-c7002d4f1eeb] Running
	I1213 16:24:08.183270 1582456 system_pods.go:89] "etcd-custom-flannel-023791" [51ad5b5b-7ece-4beb-a975-bc2d335a9214] Running
	I1213 16:24:08.183275 1582456 system_pods.go:89] "kube-apiserver-custom-flannel-023791" [46bc293b-5ec4-4c56-861c-a81845984c72] Running
	I1213 16:24:08.183279 1582456 system_pods.go:89] "kube-controller-manager-custom-flannel-023791" [be67e8c3-2f16-4590-a4c1-23a785cdc9eb] Running
	I1213 16:24:08.183283 1582456 system_pods.go:89] "kube-proxy-s6z9r" [6f268ded-79b0-4c23-aa45-54437fe2aafa] Running
	I1213 16:24:08.183287 1582456 system_pods.go:89] "kube-scheduler-custom-flannel-023791" [a2f294ec-04e3-4b6d-8a2a-6cdab770947f] Running
	I1213 16:24:08.183291 1582456 system_pods.go:89] "storage-provisioner" [c343f559-9e39-4880-aab6-39aee19da238] Running
	I1213 16:24:08.183299 1582456 system_pods.go:126] duration metric: took 17.424912042s to wait for k8s-apps to be running ...
	I1213 16:24:08.183335 1582456 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 16:24:08.183402 1582456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 16:24:08.196721 1582456 system_svc.go:56] duration metric: took 13.405382ms WaitForService to wait for kubelet
	I1213 16:24:08.196754 1582456 kubeadm.go:587] duration metric: took 21.890414876s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 16:24:08.196773 1582456 node_conditions.go:102] verifying NodePressure condition ...
	I1213 16:24:08.199528 1582456 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 16:24:08.199579 1582456 node_conditions.go:123] node cpu capacity is 2
	I1213 16:24:08.199595 1582456 node_conditions.go:105] duration metric: took 2.81615ms to run NodePressure ...
	I1213 16:24:08.199608 1582456 start.go:242] waiting for startup goroutines ...
	I1213 16:24:08.199616 1582456 start.go:247] waiting for cluster config update ...
	I1213 16:24:08.199628 1582456 start.go:256] writing updated cluster config ...
	I1213 16:24:08.199930 1582456 ssh_runner.go:195] Run: rm -f paused
	I1213 16:24:08.203932 1582456 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 16:24:08.207623 1582456 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jhpvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:24:08.212511 1582456 pod_ready.go:94] pod "coredns-66bc5c9577-jhpvm" is "Ready"
	I1213 16:24:08.212597 1582456 pod_ready.go:86] duration metric: took 4.940171ms for pod "coredns-66bc5c9577-jhpvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:24:08.215077 1582456 pod_ready.go:83] waiting for pod "etcd-custom-flannel-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:24:08.219796 1582456 pod_ready.go:94] pod "etcd-custom-flannel-023791" is "Ready"
	I1213 16:24:08.219825 1582456 pod_ready.go:86] duration metric: took 4.718219ms for pod "etcd-custom-flannel-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:24:08.222329 1582456 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:24:08.227866 1582456 pod_ready.go:94] pod "kube-apiserver-custom-flannel-023791" is "Ready"
	I1213 16:24:08.227892 1582456 pod_ready.go:86] duration metric: took 5.536588ms for pod "kube-apiserver-custom-flannel-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:24:08.230200 1582456 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:24:08.608455 1582456 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-023791" is "Ready"
	I1213 16:24:08.608546 1582456 pod_ready.go:86] duration metric: took 378.307357ms for pod "kube-controller-manager-custom-flannel-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:24:08.807744 1582456 pod_ready.go:83] waiting for pod "kube-proxy-s6z9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:24:09.208507 1582456 pod_ready.go:94] pod "kube-proxy-s6z9r" is "Ready"
	I1213 16:24:09.208535 1582456 pod_ready.go:86] duration metric: took 400.765267ms for pod "kube-proxy-s6z9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:24:09.409041 1582456 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:24:09.808207 1582456 pod_ready.go:94] pod "kube-scheduler-custom-flannel-023791" is "Ready"
	I1213 16:24:09.808240 1582456 pod_ready.go:86] duration metric: took 399.170908ms for pod "kube-scheduler-custom-flannel-023791" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 16:24:09.808253 1582456 pod_ready.go:40] duration metric: took 1.604286781s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 16:24:09.864557 1582456 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 16:24:09.868031 1582456 out.go:179] * Done! kubectl is now configured to use "custom-flannel-023791" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216398345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216470499Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216572930Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216649974Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216720996Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216786135Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216843479Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216912187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.216985974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.217088848Z" level=info msg="Connect containerd service"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.217463198Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.218120758Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.231205084Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.231272274Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.231345659Z" level=info msg="Start subscribing containerd event"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.231396062Z" level=info msg="Start recovering state"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.254976526Z" level=info msg="Start event monitor"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255192266Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255261828Z" level=info msg="Start streaming server"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255422735Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255487619Z" level=info msg="runtime interface starting up..."
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255541731Z" level=info msg="starting plugins..."
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.255628375Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 16:04:48 no-preload-439544 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 16:04:48 no-preload-439544 containerd[556]: time="2025-12-13T16:04:48.257678755Z" level=info msg="containerd successfully booted in 0.068392s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 16:24:12.464972   10251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:24:12.465794   10251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:24:12.467598   10251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:24:12.467904   10251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 16:24:12.469456   10251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 13:38] overlayfs: idmapped layers are currently not supported
	[Dec13 13:39] overlayfs: idmapped layers are currently not supported
	[Dec13 13:40] overlayfs: idmapped layers are currently not supported
	[Dec13 13:42] overlayfs: idmapped layers are currently not supported
	[Dec13 13:44] overlayfs: idmapped layers are currently not supported
	[Dec13 13:55] overlayfs: idmapped layers are currently not supported
	[Dec13 13:57] overlayfs: idmapped layers are currently not supported
	[ +37.486494] overlayfs: idmapped layers are currently not supported
	[  +5.749635] overlayfs: idmapped layers are currently not supported
	[Dec13 13:58] overlayfs: idmapped layers are currently not supported
	[Dec13 13:59] overlayfs: idmapped layers are currently not supported
	[Dec13 14:00] overlayfs: idmapped layers are currently not supported
	[Dec13 14:01] overlayfs: idmapped layers are currently not supported
	[ +10.745175] overlayfs: idmapped layers are currently not supported
	[Dec13 14:03] overlayfs: idmapped layers are currently not supported
	[ +10.655903] overlayfs: idmapped layers are currently not supported
	[Dec13 14:04] overlayfs: idmapped layers are currently not supported
	[Dec13 14:21] overlayfs: idmapped layers are currently not supported
	[Dec13 14:23] overlayfs: idmapped layers are currently not supported
	[Dec13 14:25] overlayfs: idmapped layers are currently not supported
	[Dec13 14:27] overlayfs: idmapped layers are currently not supported
	[Dec13 14:28] overlayfs: idmapped layers are currently not supported
	[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 16:24:12 up  8:06,  0 user,  load average: 1.99, 1.86, 1.47
	Linux no-preload-439544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 16:24:09 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:24:10 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1547.
	Dec 13 16:24:10 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:24:10 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:24:10 no-preload-439544 kubelet[10117]: E1213 16:24:10.168351   10117 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:24:10 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:24:10 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:24:10 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1548.
	Dec 13 16:24:10 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:24:10 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:24:10 no-preload-439544 kubelet[10123]: E1213 16:24:10.911287   10123 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:24:10 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:24:10 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:24:11 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1549.
	Dec 13 16:24:11 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:24:11 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:24:11 no-preload-439544 kubelet[10158]: E1213 16:24:11.695222   10158 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:24:11 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:24:11 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 16:24:12 no-preload-439544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1550.
	Dec 13 16:24:12 no-preload-439544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:24:12 no-preload-439544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 16:24:12 no-preload-439544 kubelet[10244]: E1213 16:24:12.405114   10244 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 16:24:12 no-preload-439544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 16:24:12 no-preload-439544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-439544 -n no-preload-439544: exit status 2 (420.500177ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-439544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (258.50s)
E1213 16:27:12.415594 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:14.155699 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:17.569087 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/flannel-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:31.463851 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/calico-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:31.470311 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/calico-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:31.481757 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/calico-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:31.503200 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/calico-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:31.544581 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/calico-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:31.626038 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/calico-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:31.788248 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/calico-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:32.109657 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/calico-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:32.751872 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/calico-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:34.033504 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/calico-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:36.594909 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/calico-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:27:41.716537 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/calico-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
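
Note on the failure above: every kubelet restart in the no-preload log (restart counters 1547 through 1550) dies on the same validation error, "kubelet is configured to not run on a host using cgroup v1", so the node never becomes ready and the follow-up status check reports the apiserver as Stopped. A quick way to confirm which cgroup hierarchy a host is running is to look for cgroup.controllers at the cgroup root; the small Go sketch below does only that and is a diagnostic aid, not part of the test suite.

package main

import (
	"fmt"
	"os"
)

// On a host booted with the unified cgroup v2 hierarchy, the file
// /sys/fs/cgroup/cgroup.controllers exists at the cgroup root; on a
// cgroup v1 host it does not (the shell equivalent is
// `stat -fc %T /sys/fs/cgroup/`, which prints cgroup2fs on v2).
func cgroupV2() bool {
	_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
	return err == nil
}

func main() {
	if cgroupV2() {
		fmt.Println("cgroup v2 detected: kubelet's cgroup v1 validation would pass")
	} else {
		fmt.Println("cgroup v1 detected: matches the kubelet validation failure in this log")
	}
}
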

                                                
                                    

Test pass (345/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.48
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 5.33
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.19
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 4.37
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 153.91
38 TestAddons/serial/Volcano 40.94
40 TestAddons/serial/GCPAuth/Namespaces 0.17
41 TestAddons/serial/GCPAuth/FakeCredentials 9.83
44 TestAddons/parallel/Registry 15.45
45 TestAddons/parallel/RegistryCreds 0.77
46 TestAddons/parallel/Ingress 18.91
47 TestAddons/parallel/InspektorGadget 11.8
48 TestAddons/parallel/MetricsServer 6.98
50 TestAddons/parallel/CSI 41.76
51 TestAddons/parallel/Headlamp 17.37
52 TestAddons/parallel/CloudSpanner 6.86
53 TestAddons/parallel/LocalPath 52.81
54 TestAddons/parallel/NvidiaDevicePlugin 6.21
55 TestAddons/parallel/Yakd 11.87
57 TestAddons/StoppedEnableDisable 12.37
58 TestCertOptions 36.78
59 TestCertExpiration 221.3
61 TestForceSystemdFlag 34.13
62 TestForceSystemdEnv 37.24
63 TestDockerEnvContainerd 49.82
67 TestErrorSpam/setup 31.08
68 TestErrorSpam/start 0.85
69 TestErrorSpam/status 1.12
70 TestErrorSpam/pause 1.73
71 TestErrorSpam/unpause 1.77
72 TestErrorSpam/stop 1.68
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 52.31
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
84 TestFunctional/serial/CacheCmd/cache/add_local 1.21
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 44.83
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.49
95 TestFunctional/serial/LogsFileCmd 1.53
96 TestFunctional/serial/InvalidService 4.58
98 TestFunctional/parallel/ConfigCmd 0.48
99 TestFunctional/parallel/DashboardCmd 9.39
100 TestFunctional/parallel/DryRun 0.46
101 TestFunctional/parallel/InternationalLanguage 0.2
102 TestFunctional/parallel/StatusCmd 1.2
106 TestFunctional/parallel/ServiceCmdConnect 8.64
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 21.94
110 TestFunctional/parallel/SSHCmd 0.77
111 TestFunctional/parallel/CpCmd 2.42
113 TestFunctional/parallel/FileSync 0.4
114 TestFunctional/parallel/CertSync 2.25
118 TestFunctional/parallel/NodeLabels 0.1
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
122 TestFunctional/parallel/License 0.32
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.71
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.41
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.2
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
136 TestFunctional/parallel/ProfileCmd/profile_list 0.44
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
138 TestFunctional/parallel/MountCmd/any-port 8.33
139 TestFunctional/parallel/ServiceCmd/List 0.53
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
142 TestFunctional/parallel/ServiceCmd/Format 0.4
143 TestFunctional/parallel/ServiceCmd/URL 0.39
144 TestFunctional/parallel/MountCmd/specific-port 2.03
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.96
146 TestFunctional/parallel/Version/short 0.16
147 TestFunctional/parallel/Version/components 1.5
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
152 TestFunctional/parallel/ImageCommands/ImageBuild 4.12
153 TestFunctional/parallel/ImageCommands/Setup 0.64
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
155 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
156 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
157 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
158 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.27
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.7
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.41
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.03
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.05
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.32
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.83
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.95
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.94
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.46
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.45
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.22
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.7
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.12
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.35
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 2.18
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.74
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.35
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.5
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.24
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.24
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.53
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.27
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.47
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.5
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.62
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.19
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.41
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.58
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.87
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.49
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.38
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.39
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.97
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.81
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 172.31
265 TestMultiControlPlane/serial/DeployApp 6.84
266 TestMultiControlPlane/serial/PingHostFromPods 1.72
267 TestMultiControlPlane/serial/AddWorkerNode 61.19
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.1
270 TestMultiControlPlane/serial/CopyFile 20.68
271 TestMultiControlPlane/serial/StopSecondaryNode 13.13
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.9
273 TestMultiControlPlane/serial/RestartSecondaryNode 13.6
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.09
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 112.65
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.11
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
278 TestMultiControlPlane/serial/StopCluster 36.45
279 TestMultiControlPlane/serial/RestartCluster 61.76
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.99
281 TestMultiControlPlane/serial/AddSecondaryNode 82.08
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
287 TestJSONOutput/start/Command 52.31
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.78
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.65
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.99
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 40.28
313 TestKicCustomNetwork/use_default_bridge_network 37.47
314 TestKicExistingNetwork 36.46
315 TestKicCustomSubnet 36.17
316 TestKicStaticIP 36.97
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 73.46
321 TestMountStart/serial/StartWithMountFirst 8.34
322 TestMountStart/serial/VerifyMountFirst 0.29
323 TestMountStart/serial/StartWithMountSecond 8.36
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.7
326 TestMountStart/serial/VerifyMountPostDelete 0.27
327 TestMountStart/serial/Stop 1.28
328 TestMountStart/serial/RestartStopped 7.49
329 TestMountStart/serial/VerifyMountPostStop 0.29
332 TestMultiNode/serial/FreshStart2Nodes 109.12
333 TestMultiNode/serial/DeployApp2Nodes 5.05
334 TestMultiNode/serial/PingHostFrom2Pods 1.02
335 TestMultiNode/serial/AddNode 27.88
336 TestMultiNode/serial/MultiNodeLabels 0.1
337 TestMultiNode/serial/ProfileList 0.73
338 TestMultiNode/serial/CopyFile 11.01
339 TestMultiNode/serial/StopNode 2.4
340 TestMultiNode/serial/StartAfterStop 8.43
341 TestMultiNode/serial/RestartKeepsNodes 72.79
342 TestMultiNode/serial/DeleteNode 5.68
343 TestMultiNode/serial/StopMultiNode 24.2
344 TestMultiNode/serial/RestartMultiNode 50.99
345 TestMultiNode/serial/ValidateNameConflict 35.59
350 TestPreload 116.41
352 TestScheduledStopUnix 111.02
355 TestInsufficientStorage 12.41
356 TestRunningBinaryUpgrade 315.1
359 TestMissingContainerUpgrade 166
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 45.8
363 TestNoKubernetes/serial/StartWithStopK8s 17.57
364 TestNoKubernetes/serial/Start 5.34
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
367 TestNoKubernetes/serial/ProfileList 0.8
368 TestNoKubernetes/serial/Stop 1.32
369 TestNoKubernetes/serial/StartNoArgs 6.97
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
371 TestStoppedBinaryUpgrade/Setup 0.99
372 TestStoppedBinaryUpgrade/Upgrade 53.45
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.38
382 TestPause/serial/Start 81.64
383 TestPause/serial/SecondStartNoReconfiguration 6.18
384 TestPause/serial/Pause 0.73
385 TestPause/serial/VerifyStatus 0.34
386 TestPause/serial/Unpause 0.67
387 TestPause/serial/PauseAgain 0.88
388 TestPause/serial/DeletePaused 2.98
389 TestPause/serial/VerifyDeletedResources 0.38
397 TestNetworkPlugins/group/false 3.73
402 TestStartStop/group/old-k8s-version/serial/FirstStart 65.4
405 TestStartStop/group/old-k8s-version/serial/DeployApp 8.36
406 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.22
407 TestStartStop/group/old-k8s-version/serial/Stop 12.03
408 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
409 TestStartStop/group/old-k8s-version/serial/SecondStart 52.05
410 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
411 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
412 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
413 TestStartStop/group/old-k8s-version/serial/Pause 3.1
415 TestStartStop/group/embed-certs/serial/FirstStart 80.6
416 TestStartStop/group/embed-certs/serial/DeployApp 9.4
417 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.14
418 TestStartStop/group/embed-certs/serial/Stop 12.11
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
420 TestStartStop/group/embed-certs/serial/SecondStart 52.26
421 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
423 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
424 TestStartStop/group/embed-certs/serial/Pause 3.24
426 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 78.99
427 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.19
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.11
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.18
432 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
434 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
435 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.16
440 TestStartStop/group/no-preload/serial/Stop 1.36
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
443 TestStartStop/group/newest-cni/serial/DeployApp 0
446 TestStartStop/group/newest-cni/serial/Stop 1.31
447 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
453 TestNetworkPlugins/group/auto/Start 48.5
454 TestNetworkPlugins/group/auto/KubeletFlags 0.35
455 TestNetworkPlugins/group/auto/NetCatPod 9.27
456 TestNetworkPlugins/group/auto/DNS 0.18
457 TestNetworkPlugins/group/auto/Localhost 0.17
458 TestNetworkPlugins/group/auto/HairPin 0.15
460 TestNetworkPlugins/group/flannel/Start 56.35
461 TestNetworkPlugins/group/flannel/ControllerPod 6
462 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
463 TestNetworkPlugins/group/flannel/NetCatPod 9.26
464 TestNetworkPlugins/group/flannel/DNS 0.19
465 TestNetworkPlugins/group/flannel/Localhost 0.15
466 TestNetworkPlugins/group/flannel/HairPin 0.18
467 TestNetworkPlugins/group/calico/Start 58.19
468 TestNetworkPlugins/group/calico/ControllerPod 6.01
469 TestNetworkPlugins/group/calico/KubeletFlags 0.33
470 TestNetworkPlugins/group/calico/NetCatPod 9.27
471 TestNetworkPlugins/group/calico/DNS 0.2
472 TestNetworkPlugins/group/calico/Localhost 0.14
473 TestNetworkPlugins/group/calico/HairPin 0.15
474 TestNetworkPlugins/group/custom-flannel/Start 59.91
475 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
476 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.29
477 TestNetworkPlugins/group/kindnet/Start 89.33
478 TestNetworkPlugins/group/custom-flannel/DNS 0.18
479 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
480 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
481 TestNetworkPlugins/group/bridge/Start 83.21
482 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
483 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
484 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
485 TestNetworkPlugins/group/kindnet/DNS 0.42
486 TestNetworkPlugins/group/kindnet/Localhost 0.15
487 TestNetworkPlugins/group/kindnet/HairPin 0.16
488 TestNetworkPlugins/group/bridge/KubeletFlags 0.47
489 TestNetworkPlugins/group/bridge/NetCatPod 9.36
490 TestNetworkPlugins/group/bridge/DNS 0.23
491 TestNetworkPlugins/group/bridge/Localhost 0.24
492 TestNetworkPlugins/group/bridge/HairPin 0.21
493 TestNetworkPlugins/group/enable-default-cni/Start 80.68
494 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
495 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.27
496 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
497 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
498 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
x
+
TestDownloadOnly/v1.28.0/json-events (5.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-663089 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-663089 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.480194408s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.48s)
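
The json-events test drives `minikube start -o=json --download-only ...`, which streams one JSON event per line; the DistinctCurrentSteps and IncreasingCurrentSteps checks later in this report consume the current-step value of those events. Below is a minimal consumer sketch; the event field names (type, data.currentstep, data.message) are assumptions inferred from those check names, not a schema verified against the minikube source.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Assumed shape of one line of `minikube start -o=json` output; only the
// fields printed below are declared, and their names are inferred from the
// DistinctCurrentSteps/IncreasingCurrentSteps checks elsewhere in this report.
type event struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
		Message     string `json:"message"`
	} `json:"data"`
}

func main() {
	// Usage (illustrative): minikube start -o=json ... | go run main.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // allow long event lines
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		fmt.Printf("step %s %s: %s\n", ev.Data.CurrentStep, ev.Type, ev.Data.Message)
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
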

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 14:30:55.756294 1252934 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1213 14:30:55.756391 1252934 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-663089
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-663089: exit status 85 (94.943851ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-663089 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-663089 │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:30:50
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:30:50.320769 1252940 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:30:50.320976 1252940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:30:50.321003 1252940 out.go:374] Setting ErrFile to fd 2...
	I1213 14:30:50.321021 1252940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:30:50.321304 1252940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	W1213 14:30:50.321470 1252940 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22122-1251074/.minikube/config/config.json: open /home/jenkins/minikube-integration/22122-1251074/.minikube/config/config.json: no such file or directory
	I1213 14:30:50.321936 1252940 out.go:368] Setting JSON to true
	I1213 14:30:50.322810 1252940 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22399,"bootTime":1765613851,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:30:50.322906 1252940 start.go:143] virtualization:  
	I1213 14:30:50.328531 1252940 out.go:99] [download-only-663089] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1213 14:30:50.328738 1252940 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 14:30:50.328863 1252940 notify.go:221] Checking for updates...
	I1213 14:30:50.333680 1252940 out.go:171] MINIKUBE_LOCATION=22122
	I1213 14:30:50.337165 1252940 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:30:50.340453 1252940 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:30:50.343775 1252940 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:30:50.346845 1252940 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 14:30:50.352923 1252940 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 14:30:50.353258 1252940 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:30:50.382457 1252940 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:30:50.382589 1252940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:30:50.439072 1252940 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-13 14:30:50.43007203 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:30:50.439191 1252940 docker.go:319] overlay module found
	I1213 14:30:50.442395 1252940 out.go:99] Using the docker driver based on user configuration
	I1213 14:30:50.442441 1252940 start.go:309] selected driver: docker
	I1213 14:30:50.442448 1252940 start.go:927] validating driver "docker" against <nil>
	I1213 14:30:50.442568 1252940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:30:50.493462 1252940 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-13 14:30:50.484864351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:30:50.493615 1252940 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 14:30:50.493903 1252940 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 14:30:50.494048 1252940 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 14:30:50.497277 1252940 out.go:171] Using Docker driver with root privileges
	I1213 14:30:50.500375 1252940 cni.go:84] Creating CNI manager for ""
	I1213 14:30:50.500442 1252940 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:30:50.500455 1252940 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 14:30:50.500542 1252940 start.go:353] cluster config:
	{Name:download-only-663089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-663089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:30:50.503603 1252940 out.go:99] Starting "download-only-663089" primary control-plane node in "download-only-663089" cluster
	I1213 14:30:50.503631 1252940 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 14:30:50.506563 1252940 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 14:30:50.506607 1252940 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1213 14:30:50.506766 1252940 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 14:30:50.522304 1252940 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 14:30:50.522503 1252940 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 14:30:50.522628 1252940 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 14:30:50.571453 1252940 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1213 14:30:50.571481 1252940 cache.go:65] Caching tarball of preloaded images
	I1213 14:30:50.571767 1252940 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1213 14:30:50.575278 1252940 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1213 14:30:50.575326 1252940 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1213 14:30:50.659019 1252940 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1213 14:30:50.659157 1252940 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1213 14:30:54.852934 1252940 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1213 14:30:54.853370 1252940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/download-only-663089/config.json ...
	I1213 14:30:54.853422 1252940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/download-only-663089/config.json: {Name:mkb9b1650f46ba8a8c16939fa20bea7d68f90e27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:30:54.853628 1252940 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1213 14:30:54.853884 1252940 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-663089 host does not exist
	  To start a cluster, run: "minikube start -p download-only-663089"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
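
The Last Start log above shows how the preload tarball is fetched: the MD5 checksum is first obtained from the GCS API (38d7f581f2fa4226c8af2c9106b982b7 for the v1.28.0 tarball) and then appended to the download URL as checksum=md5:..., so the downloader can verify the file on arrival. Below is a minimal sketch of that verification step, assuming the tarball is already on disk; the file path in main is illustrative and this is not minikube's actual download code.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// md5OfFile streams a file through MD5 and returns the hex digest, the same
// digest format used in the checksum=md5:... query seen in the log above.
func md5OfFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	// Illustrative local path; the report downloads into .minikube/cache/preloaded-tarball/.
	path := "preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"
	want := "38d7f581f2fa4226c8af2c9106b982b7" // checksum the GCS API returned above

	got, err := md5OfFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "checksum failed:", err)
		os.Exit(1)
	}
	if got != want {
		fmt.Printf("preload checksum mismatch: got %s, want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("preload tarball matches the expected checksum")
}
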

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-663089
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (5.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-172760 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-172760 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.325766594s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (5.33s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 14:31:01.551346 1252934 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1213 14:31:01.551382 1252934 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-172760
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-172760: exit status 85 (188.831803ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-663089 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-663089 │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │ 13 Dec 25 14:30 UTC │
	│ delete  │ -p download-only-663089                                                                                                                                                               │ download-only-663089 │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │ 13 Dec 25 14:30 UTC │
	│ start   │ -o=json --download-only -p download-only-172760 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-172760 │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:30:56
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:30:56.269364 1253144 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:30:56.269498 1253144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:30:56.269507 1253144 out.go:374] Setting ErrFile to fd 2...
	I1213 14:30:56.269512 1253144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:30:56.269780 1253144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:30:56.270202 1253144 out.go:368] Setting JSON to true
	I1213 14:30:56.271035 1253144 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22405,"bootTime":1765613851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:30:56.271109 1253144 start.go:143] virtualization:  
	I1213 14:30:56.274532 1253144 out.go:99] [download-only-172760] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:30:56.274814 1253144 notify.go:221] Checking for updates...
	I1213 14:30:56.277557 1253144 out.go:171] MINIKUBE_LOCATION=22122
	I1213 14:30:56.280586 1253144 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:30:56.283589 1253144 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:30:56.286563 1253144 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:30:56.290113 1253144 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 14:30:56.295864 1253144 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 14:30:56.296150 1253144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:30:56.328290 1253144 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:30:56.328402 1253144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:30:56.389270 1253144 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-13 14:30:56.380354006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:30:56.389381 1253144 docker.go:319] overlay module found
	I1213 14:30:56.392382 1253144 out.go:99] Using the docker driver based on user configuration
	I1213 14:30:56.392444 1253144 start.go:309] selected driver: docker
	I1213 14:30:56.392453 1253144 start.go:927] validating driver "docker" against <nil>
	I1213 14:30:56.392614 1253144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:30:56.449269 1253144 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-13 14:30:56.439200158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:30:56.449418 1253144 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 14:30:56.449710 1253144 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 14:30:56.449865 1253144 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 14:30:56.453040 1253144 out.go:171] Using Docker driver with root privileges
	I1213 14:30:56.456046 1253144 cni.go:84] Creating CNI manager for ""
	I1213 14:30:56.456150 1253144 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 14:30:56.456166 1253144 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 14:30:56.456260 1253144 start.go:353] cluster config:
	{Name:download-only-172760 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-172760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:30:56.459249 1253144 out.go:99] Starting "download-only-172760" primary control-plane node in "download-only-172760" cluster
	I1213 14:30:56.459284 1253144 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 14:30:56.462373 1253144 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 14:30:56.462445 1253144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 14:30:56.462532 1253144 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 14:30:56.478763 1253144 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 14:30:56.478928 1253144 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 14:30:56.478954 1253144 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 14:30:56.478973 1253144 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 14:30:56.478986 1253144 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 14:30:56.520740 1253144 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1213 14:30:56.520775 1253144 cache.go:65] Caching tarball of preloaded images
	I1213 14:30:56.520983 1253144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 14:30:56.524133 1253144 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1213 14:30:56.524171 1253144 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1213 14:30:56.613836 1253144 preload.go:295] Got checksum from GCS API "cd1a05d5493c9270e248bf47fb3f071d"
	I1213 14:30:56.613890 1253144 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:cd1a05d5493c9270e248bf47fb3f071d -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1213 14:31:00.907692 1253144 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1213 14:31:00.908069 1253144 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/download-only-172760/config.json ...
	I1213 14:31:00.908104 1253144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/download-only-172760/config.json: {Name:mk0e1985879610b0936ad804b59288ba34aa0acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 14:31:00.908295 1253144 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 14:31:00.908480 1253144 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/linux/arm64/v1.34.2/kubectl
	
	
	* The control-plane node download-only-172760 host does not exist
	  To start a cluster, run: "minikube start -p download-only-172760"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.19s)
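
The start log above ends with the preload step: minikube locates the remote preload tarball, asks the GCS API for its md5 checksum, and then downloads the file with that checksum attached to the URL. Below is a minimal Go sketch of that download-then-verify pattern, reusing the URL and checksum printed in the log; the helper is illustrative, not minikube's actual download package.

    // preload_check.go: download a file and verify it against an expected md5,
    // the same shape of check the log performs for the v1.34.2 preload tarball.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func downloadWithMD5(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        // Hash the stream while writing it to disk, so the body is read only once.
        h := md5.New()
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        // URL and checksum copied from the log lines above.
        url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4"
        if err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "cd1a05d5493c9270e248bf47fb3f071d"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }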

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-172760
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (4.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-712795 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-712795 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.374453646s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (4.37s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 14:31:06.469542 1252934 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1213 14:31:06.469578 1252934 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-712795
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-712795: exit status 85 (83.334188ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-663089 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-663089 │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │ 13 Dec 25 14:30 UTC │
	│ delete  │ -p download-only-663089                                                                                                                                                                      │ download-only-663089 │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │ 13 Dec 25 14:30 UTC │
	│ start   │ -o=json --download-only -p download-only-172760 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-172760 │ jenkins │ v1.37.0 │ 13 Dec 25 14:30 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 14:31 UTC │ 13 Dec 25 14:31 UTC │
	│ delete  │ -p download-only-172760                                                                                                                                                                      │ download-only-172760 │ jenkins │ v1.37.0 │ 13 Dec 25 14:31 UTC │ 13 Dec 25 14:31 UTC │
	│ start   │ -o=json --download-only -p download-only-712795 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-712795 │ jenkins │ v1.37.0 │ 13 Dec 25 14:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 14:31:02
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 14:31:02.140670 1253340 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:31:02.140873 1253340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:31:02.140900 1253340 out.go:374] Setting ErrFile to fd 2...
	I1213 14:31:02.140920 1253340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:31:02.141315 1253340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:31:02.142077 1253340 out.go:368] Setting JSON to true
	I1213 14:31:02.143010 1253340 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22411,"bootTime":1765613851,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:31:02.143106 1253340 start.go:143] virtualization:  
	I1213 14:31:02.146686 1253340 out.go:99] [download-only-712795] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:31:02.146917 1253340 notify.go:221] Checking for updates...
	I1213 14:31:02.149899 1253340 out.go:171] MINIKUBE_LOCATION=22122
	I1213 14:31:02.152857 1253340 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:31:02.155980 1253340 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:31:02.158852 1253340 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:31:02.161745 1253340 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 14:31:02.167477 1253340 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 14:31:02.167797 1253340 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:31:02.199984 1253340 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:31:02.200098 1253340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:31:02.257053 1253340 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 14:31:02.247646796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:31:02.257178 1253340 docker.go:319] overlay module found
	I1213 14:31:02.260217 1253340 out.go:99] Using the docker driver based on user configuration
	I1213 14:31:02.260266 1253340 start.go:309] selected driver: docker
	I1213 14:31:02.260273 1253340 start.go:927] validating driver "docker" against <nil>
	I1213 14:31:02.260385 1253340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:31:02.312989 1253340 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 14:31:02.303735126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:31:02.313151 1253340 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 14:31:02.313413 1253340 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 14:31:02.313567 1253340 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 14:31:02.316630 1253340 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-712795 host does not exist
	  To start a cluster, run: "minikube start -p download-only-712795"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-712795
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I1213 14:31:07.773873 1252934 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-418157 --alsologtostderr --binary-mirror http://127.0.0.1:44467 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-418157" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-418157
--- PASS: TestBinaryMirror (0.62s)
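
TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:44467, so the Kubernetes binaries are fetched from a local HTTP server instead of dl.k8s.io (the log line above shows the dl.k8s.io kubectl URL that would otherwise be used). A minimal sketch of such a mirror follows; it assumes a ./mirror-root directory pre-populated with the path layout the mirror URL is expected to serve, and it is illustrative, not the server the test itself brings up.

    // mirror.go: serve a local directory over plain HTTP on the loopback address
    // the test uses, so minikube can be pointed at it with --binary-mirror.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        fs := http.FileServer(http.Dir("./mirror-root"))
        log.Fatal(http.ListenAndServe("127.0.0.1:44467", fs))
    }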

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-386332
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-386332: exit status 85 (65.832653ms)

                                                
                                                
-- stdout --
	* Profile "addons-386332" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-386332"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-386332
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-386332: exit status 85 (73.708066ms)

                                                
                                                
-- stdout --
	* Profile "addons-386332" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-386332"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (153.91s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-386332 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-386332 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m33.909273515s)
--- PASS: TestAddons/Setup (153.91s)

                                                
                                    
x
+
TestAddons/serial/Volcano (40.94s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 59.242478ms
addons_test.go:878: volcano-admission stabilized in 59.83314ms
addons_test.go:870: volcano-scheduler stabilized in 59.90632ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-ls4n8" [b57f5de3-b20b-45aa-82fd-6d98e87cbb98] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004625114s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-qcgvc" [ff9a5763-9b54-481a-976e-9d701c8725ea] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.00295651s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-gb5gs" [2570900e-4906-4dae-8e46-6ca65ef2ff7b] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003222724s
addons_test.go:905: (dbg) Run:  kubectl --context addons-386332 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-386332 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-386332 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [83fc01b4-f1ff-491a-a4e6-84bd86f43fab] Pending
helpers_test.go:353: "test-job-nginx-0" [83fc01b4-f1ff-491a-a4e6-84bd86f43fab] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [83fc01b4-f1ff-491a-a4e6-84bd86f43fab] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.005016389s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-386332 addons disable volcano --alsologtostderr -v=1: (12.053105909s)
--- PASS: TestAddons/serial/Volcano (40.94s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-386332 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-386332 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.83s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-386332 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-386332 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5b6d6a58-8ded-4aa5-9ca7-5353c7fe19db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5b6d6a58-8ded-4aa5-9ca7-5353c7fe19db] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004279787s
addons_test.go:696: (dbg) Run:  kubectl --context addons-386332 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-386332 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-386332 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-386332 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.83s)
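
The FakeCredentials test checks, from inside the busybox pod, that the gcp-auth addon injected a GOOGLE_APPLICATION_CREDENTIALS variable, that the file it points at is mounted (the test also cats /google-app-creds.json), and that GOOGLE_CLOUD_PROJECT is set. A minimal Go sketch of that in-pod probe follows; it assumes it runs inside a pod the webhook has mutated and is illustrative rather than the test's own assertion.

    // gcp_auth_probe.go: verify the env var and mounted credentials file that the
    // gcp-auth webhook is expected to inject into workload pods.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        path := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
        if path == "" {
            fmt.Fprintln(os.Stderr, "GOOGLE_APPLICATION_CREDENTIALS is not set; credentials were not injected")
            os.Exit(1)
        }
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("credentials injected at %s (%d bytes)\n", path, len(data))
        fmt.Println("GOOGLE_CLOUD_PROJECT =", os.Getenv("GOOGLE_CLOUD_PROJECT"))
    }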

                                                
                                    
x
+
TestAddons/parallel/Registry (15.45s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 4.300724ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-4z9l7" [f0c11081-5bca-4c24-9d22-0b6969b023dd] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003052435s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-2q4z2" [638d3855-34f3-4085-9c18-e3f289880b02] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003941254s
addons_test.go:394: (dbg) Run:  kubectl --context addons-386332 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-386332 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-386332 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.44230813s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 ip
2025/12/13 14:34:57 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.45s)
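
The Registry test probes the addon twice: with "wget --spider" against the in-cluster Service name, and with a plain GET against the node IP on port 5000 (the DEBUG line above). A minimal Go sketch of the same reachability checks follows; the first URL only resolves from inside the cluster, and the probe itself is illustrative, not the test's code.

    // registry_probe.go: header-only reachability checks against the registry
    // endpoints that appear in the log above.
    package main

    import (
        "fmt"
        "net/http"
        "os"
        "time"
    )

    func probe(url string) error {
        client := &http.Client{Timeout: 10 * time.Second}
        // HEAD is the closest equivalent of wget --spider: fetch headers, discard the body.
        resp, err := client.Head(url)
        if err != nil {
            return err
        }
        resp.Body.Close()
        fmt.Printf("%s -> %s\n", url, resp.Status)
        return nil
    }

    func main() {
        urls := []string{
            "http://registry.kube-system.svc.cluster.local", // resolvable only inside the cluster
            "http://192.168.49.2:5000",                      // node IP and port from the DEBUG line
        }
        for _, url := range urls {
            if err := probe(url); err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
        }
    }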

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.77s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.289552ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-386332
addons_test.go:334: (dbg) Run:  kubectl --context addons-386332 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.77s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (18.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-386332 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-386332 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-386332 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [3ca98411-8faf-44bd-8475-2fed7ea46d11] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [3ca98411-8faf-44bd-8475-2fed7ea46d11] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003174039s
I1213 14:36:15.254832 1252934 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-386332 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-386332 addons disable ingress-dns --alsologtostderr -v=1: (1.477429031s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-386332 addons disable ingress --alsologtostderr -v=1: (7.839019939s)
--- PASS: TestAddons/parallel/Ingress (18.91s)
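
The Ingress test verifies routing by curling the node's loopback address with an overridden Host header (curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'), so the ingress controller matches the nginx rule even though no DNS entry for nginx.example.com exists. A minimal Go equivalent of that request follows; like the test, it has to run inside the node (the test wraps it in minikube ssh), and the client code is illustrative.

    // ingress_probe.go: send a request to 127.0.0.1 while presenting the Host
    // header the ingress rule expects.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // In Go, the Host header is overridden via req.Host, not req.Header.Set.
        req.Host = "nginx.example.com"

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%s bytes=%d\n", resp.Status, len(body))
    }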

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.8s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-sg4tq" [6ef3409c-6adb-4701-b5f8-f4905881581b] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003652788s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-386332 addons disable inspektor-gadget --alsologtostderr -v=1: (5.790728755s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.98s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.981776ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-cn9vf" [73d7afac-6cd0-4ced-acc8-c5bf6df61a4f] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003280605s
addons_test.go:465: (dbg) Run:  kubectl --context addons-386332 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.98s)

                                                
                                    
x
+
TestAddons/parallel/CSI (41.76s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1213 14:35:24.948427 1252934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 14:35:24.952359 1252934 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 14:35:24.952393 1252934 kapi.go:107] duration metric: took 6.955087ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.970135ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-386332 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-386332 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [ab0d5245-56a5-463c-b270-c2455c278dd1] Pending
helpers_test.go:353: "task-pv-pod" [ab0d5245-56a5-463c-b270-c2455c278dd1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [ab0d5245-56a5-463c-b270-c2455c278dd1] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003452185s
addons_test.go:574: (dbg) Run:  kubectl --context addons-386332 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-386332 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-386332 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-386332 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-386332 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-386332 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-386332 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [777ee425-cba4-4f4f-b270-bacedbd1f516] Pending
helpers_test.go:353: "task-pv-pod-restore" [777ee425-cba4-4f4f-b270-bacedbd1f516] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [777ee425-cba4-4f4f-b270-bacedbd1f516] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.006182483s
addons_test.go:616: (dbg) Run:  kubectl --context addons-386332 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-386332 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-386332 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-386332 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.482529541s)
--- PASS: TestAddons/parallel/CSI (41.76s)
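
The blocks of repeated "kubectl get pvc ... -o jsonpath={.status.phase}" lines above are a polling loop: the helper keeps reading the claim's phase until it reports Bound or the 6m budget runs out. A minimal Go sketch of that loop, shelling out to kubectl the same way, follows; the context, claim name, and timeout are taken from the log, everything else is illustrative.

    // pvc_wait.go: poll a PersistentVolumeClaim's phase until it becomes Bound.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
        "time"
    )

    func pvcPhase(kubeContext, name, namespace string) (string, error) {
        out, err := exec.Command("kubectl", "--context", kubeContext,
            "get", "pvc", name, "-n", namespace,
            "-o", "jsonpath={.status.phase}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget the test waits
        for time.Now().Before(deadline) {
            phase, err := pvcPhase("addons-386332", "hpvc", "default")
            if err == nil && phase == "Bound" {
                fmt.Println("pvc hpvc is Bound")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for pvc hpvc to become Bound")
        os.Exit(1)
    }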

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.37s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-386332 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-386332 --alsologtostderr -v=1: (1.399367981s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-29fk2" [f3f6f880-08a3-43ef-8d77-c8e93bdc0c13] Pending
helpers_test.go:353: "headlamp-dfcdc64b-29fk2" [f3f6f880-08a3-43ef-8d77-c8e93bdc0c13] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-29fk2" [f3f6f880-08a3-43ef-8d77-c8e93bdc0c13] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003172064s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-386332 addons disable headlamp --alsologtostderr -v=1: (5.967433749s)
--- PASS: TestAddons/parallel/Headlamp (17.37s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.86s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-jw8qg" [332fe2af-7b3c-4fff-85ba-cc62202445f9] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003628833s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.86s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (52.81s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-386332 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-386332 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-386332 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [6377778d-4a85-4f6a-b150-d3540fa271c6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [6377778d-4a85-4f6a-b150-d3540fa271c6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [6377778d-4a85-4f6a-b150-d3540fa271c6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003199299s
addons_test.go:969: (dbg) Run:  kubectl --context addons-386332 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 ssh "cat /opt/local-path-provisioner/pvc-e1456ddf-8c24-4fd0-b5df-10df01d26d12_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-386332 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-386332 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-386332 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.398756776s)
--- PASS: TestAddons/parallel/LocalPath (52.81s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.21s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-j7zmt" [ac70647c-8815-4db7-aed2-18f4feabebf6] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003740249s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-386332 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.205411976s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.21s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-nrhbt" [5c767fc7-bfa0-4d87-8af0-4df2d7e2ef40] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003705777s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-386332 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-386332 addons disable yakd --alsologtostderr -v=1: (5.866399445s)
--- PASS: TestAddons/parallel/Yakd (11.87s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.37s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-386332
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-386332: (12.089593633s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-386332
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-386332
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-386332
--- PASS: TestAddons/StoppedEnableDisable (12.37s)

                                                
                                    
x
+
TestCertOptions (36.78s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-451262 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1213 15:53:23.671960 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:53:42.555464 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-451262 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.641116177s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-451262 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-451262 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-451262 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-451262" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-451262
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-451262: (2.260333319s)
--- PASS: TestCertOptions (36.78s)
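
TestCertOptions asks for extra API server SANs (--apiserver-ips=127.0.0.1 and 192.168.15.15, --apiserver-names=localhost and www.google.com) and then inspects the generated certificate with openssl x509. A minimal Go sketch of the same inspection using crypto/x509 follows; the certificate path is the one the test reads over ssh, and parsing it locally like this is illustrative.

    // cert_sans.go: print the DNS and IP SANs of the minikube apiserver certificate.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found in apiserver.crt")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // With the flags above honored, DNSNames should contain localhost and
        // www.google.com, and IPAddresses should contain 127.0.0.1 and 192.168.15.15.
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs: ", cert.IPAddresses)
    }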

                                                
                                    
x
+
TestCertExpiration (221.3s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-652483 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-652483 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (30.726998272s)
E1213 15:51:41.251701 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-652483 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-652483 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.810124326s)
helpers_test.go:176: Cleaning up "cert-expiration-652483" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-652483
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-652483: (2.757879353s)
--- PASS: TestCertExpiration (221.30s)
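A rough equivalent of TestCertExpiration, with an illustrative profile name; the sleep stands in for the roughly three-minute gap between the two starts in the log, after which the short-lived certs have expired and the second start has to regenerate them:

# issue cluster certificates that expire in 3 minutes
minikube start -p cert-expiration-demo --memory=3072 --cert-expiration=3m \
  --driver=docker --container-runtime=containerd

# record the current expiry of the apiserver certificate
minikube -p cert-expiration-demo ssh \
  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"

# wait past the 3-minute lifetime, then start again with a one-year expiration;
# minikube should rotate the expired certs rather than fail
sleep 200
minikube start -p cert-expiration-demo --memory=3072 --cert-expiration=8760h \
  --driver=docker --container-runtime=containerd

minikube delete -p cert-expiration-demo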

                                                
                                    
x
+
TestForceSystemdFlag (34.13s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-033518 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-033518 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.707330261s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-033518 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-033518" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-033518
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-033518: (2.118306135s)
--- PASS: TestForceSystemdFlag (34.13s)
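What docker_test.go inspects after `--force-systemd` is the containerd config it just dumped. A hand-run version, assuming the line of interest is the runc SystemdCgroup setting (the exact assertion is not spelled out in the log):

# force the systemd cgroup driver at start time
minikube start -p force-systemd-demo --memory=3072 --force-systemd \
  --driver=docker --container-runtime=containerd

# with the containerd runtime the flag ends up in the CRI runc options;
# expect SystemdCgroup = true here (assumed assertion target)
minikube -p force-systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup

minikube delete -p force-systemd-demo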

                                                
                                    
x
+
TestForceSystemdEnv (37.24s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-206382 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1213 15:50:18.171456 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-206382 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.815733965s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-206382 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-206382" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-206382
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-206382: (2.07772102s)
--- PASS: TestForceSystemdEnv (37.24s)
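TestForceSystemdEnv exercises the same path through the environment rather than the flag. A short sketch, assuming MINIKUBE_FORCE_SYSTEMD=true is the env-var form (the variable appears, unset, in the dry-run environment listings further below):

# same systemd-cgroup check, driven by the environment variable instead of --force-systemd
MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-demo \
  --memory=3072 --driver=docker --container-runtime=containerd

minikube -p force-systemd-env-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup

minikube delete -p force-systemd-env-demo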

                                                
                                    
x
+
TestDockerEnvContainerd (49.82s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-201481 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-201481 --driver=docker  --container-runtime=containerd: (33.823594756s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-201481"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-201481": (1.133586545s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-yJGiWPoFsVA4/agent.1272510" SSH_AGENT_PID="1272511" DOCKER_HOST=ssh://docker@127.0.0.1:33903 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-yJGiWPoFsVA4/agent.1272510" SSH_AGENT_PID="1272511" DOCKER_HOST=ssh://docker@127.0.0.1:33903 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-yJGiWPoFsVA4/agent.1272510" SSH_AGENT_PID="1272511" DOCKER_HOST=ssh://docker@127.0.0.1:33903 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.298243033s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-yJGiWPoFsVA4/agent.1272510" SSH_AGENT_PID="1272511" DOCKER_HOST=ssh://docker@127.0.0.1:33903 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-201481" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-201481
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-201481: (2.169425182s)
--- PASS: TestDockerEnvContainerd (49.82s)
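The docker-env flow above can be reproduced with a plain eval instead of the explicit SSH_AUTH_SOCK/DOCKER_HOST plumbing the test wires up. A sketch, assuming any directory containing a Dockerfile as the build context (the run uses testdata/docker-env) and illustrative profile and image names:

minikube start -p dockerenv-demo --driver=docker --container-runtime=containerd

# export DOCKER_HOST=ssh://... plus an ssh-agent holding the node key, so the
# host docker CLI talks to the docker daemon inside the minikube node
eval "$(minikube -p dockerenv-demo docker-env --ssh-host --ssh-add)"

# the run above disables BuildKit for the build over ssh; mirror that here
DOCKER_BUILDKIT=0 docker build -t local/dockerenv-demo:latest ./build-context
docker image ls | grep dockerenv-demo

minikube delete -p dockerenv-demo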

                                                
                                    
x
+
TestErrorSpam/setup (31.08s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-670667 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-670667 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-670667 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-670667 --driver=docker  --container-runtime=containerd: (31.082633233s)
--- PASS: TestErrorSpam/setup (31.08s)

                                                
                                    
x
+
TestErrorSpam/start (0.85s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

                                                
                                    
x
+
TestErrorSpam/status (1.12s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 status
--- PASS: TestErrorSpam/status (1.12s)

                                                
                                    
x
+
TestErrorSpam/pause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 pause
--- PASS: TestErrorSpam/pause (1.73s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.77s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 unpause
--- PASS: TestErrorSpam/unpause (1.77s)

                                                
                                    
x
+
TestErrorSpam/stop (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 stop: (1.468504734s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-670667 --log_dir /tmp/nospam-670667 stop
--- PASS: TestErrorSpam/stop (1.68s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (52.31s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-831661 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1213 14:38:42.562357 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:38:42.568858 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:38:42.580143 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:38:42.601426 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:38:42.642733 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:38:42.724081 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:38:42.885513 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:38:43.207107 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:38:43.848471 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:38:45.129800 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:38:47.692660 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:38:52.814682 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:39:03.056191 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-831661 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (52.309536256s)
--- PASS: TestFunctional/serial/StartWithProxy (52.31s)
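StartWithProxy runs the same start behind an HTTP proxy that the test spins up for itself. A rough stand-in, assuming a proxy already listening on localhost:3128 (e.g. squid); NO_PROXY has to cover the node subnet so the client can still reach the apiserver directly:

# assumption: some proxy already listens on localhost:3128; the test starts
# its own throwaway proxy instead of relying on one
export HTTP_PROXY=http://localhost:3128
export NO_PROXY=localhost,127.0.0.1,192.168.49.0/24

minikube start -p proxy-demo --memory=4096 --apiserver-port=8441 --wait=all \
  --driver=docker --container-runtime=containerd

minikube delete -p proxy-demo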

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (7s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1213 14:39:09.119143 1252934 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-831661 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-831661 --alsologtostderr -v=8: (6.994447406s)
functional_test.go:678: soft start took 6.996583103s for "functional-831661" cluster.
I1213 14:39:16.113904 1252934 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (7.00s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-831661 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-831661 cache add registry.k8s.io/pause:3.1: (1.263476537s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-831661 cache add registry.k8s.io/pause:3.3: (1.167471273s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-831661 cache add registry.k8s.io/pause:latest: (1.046002441s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-831661 /tmp/TestFunctionalserialCacheCmdcacheadd_local2768695108/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 cache add minikube-local-cache-test:functional-831661
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 cache delete minikube-local-cache-test:functional-831661
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-831661
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-831661 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.063344ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
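The cache subtests above form one remove/reload cycle that can be replayed directly against any running profile (functional-demo below is a placeholder for functional-831661):

# pre-pull an image into minikube's host-side cache and load it into the node
minikube -p functional-demo cache add registry.k8s.io/pause:latest

# drop it from the node's containerd store; inspecti now fails with exit 1
minikube -p functional-demo ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest \
  || echo "image gone, as expected"

# reload everything still listed in the cache back into the node
minikube -p functional-demo cache reload
minikube -p functional-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest \
  && echo "image restored"

# cache bookkeeping lives on the host and works without naming a profile
minikube cache list
minikube cache delete registry.k8s.io/pause:latest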

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 kubectl -- --context functional-831661 get pods
E1213 14:39:23.538092 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-831661 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (44.83s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-831661 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 14:40:04.500309 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-831661 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.826671838s)
functional_test.go:776: restart took 44.826782285s for "functional-831661" cluster.
I1213 14:40:08.510812 1252934 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (44.83s)
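ExtraConfig restarts the running cluster with an additional apiserver flag. A sketch of one way to confirm the option took effect, assuming the standard kubeadm static-pod manifest path (the test itself only waits for every component to come back healthy):

# restart the existing cluster with an extra admission plugin on the apiserver
minikube start -p functional-demo \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all

# the option is rendered into the kube-apiserver static pod manifest on the node
minikube -p functional-demo ssh \
  "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"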

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-831661 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
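ComponentHealth reads the control-plane pods as JSON and checks phase and readiness; roughly the same view from the command line, assuming the functional profile's context:

# one line per control-plane pod: name, phase, and the Ready condition status
kubectl --context functional-demo get pods -n kube-system -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'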

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-831661 logs: (1.487207548s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 logs --file /tmp/TestFunctionalserialLogsFileCmd2848732130/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-831661 logs --file /tmp/TestFunctionalserialLogsFileCmd2848732130/001/logs.txt: (1.532061461s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.58s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-831661 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-831661
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-831661: exit status 115 (800.091871ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30243 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-831661 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.58s)
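testdata/invalidsvc.yaml is not reproduced in the log; a stand-in that should trigger the same SVC_UNREACHABLE failure is any NodePort service whose selector matches no running pod:

# a NodePort service with no backing pods (selector app=invalid-svc matches nothing)
kubectl --context functional-demo create service nodeport invalid-svc --tcp=80:80

# minikube prints the URL table but refuses to hand the service out: non-zero
# exit (115 / SVC_UNREACHABLE in the run above)
minikube -p functional-demo service invalid-svc || echo "exit=$?"

kubectl --context functional-demo delete service invalid-svc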

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-831661 config get cpus: exit status 14 (82.508031ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-831661 config get cpus: exit status 14 (71.95022ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
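The ConfigCmd round-trip above is easy to replay; exit status 14 is what `config get` returns for a key that is not set:

# get on an unset key exits 14; set/get/unset round-trip
minikube -p functional-demo config unset cpus
minikube -p functional-demo config get cpus; echo "exit=$?"   # expect 14

minikube -p functional-demo config set cpus 2
minikube -p functional-demo config get cpus                   # prints 2

minikube -p functional-demo config unset cpus
minikube -p functional-demo config get cpus; echo "exit=$?"   # expect 14 again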

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-831661 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-831661 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 1287939: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.39s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-831661 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-831661 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (183.027129ms)

                                                
                                                
-- stdout --
	* [functional-831661] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 14:40:47.608291 1287458 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:40:47.608488 1287458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:40:47.608514 1287458 out.go:374] Setting ErrFile to fd 2...
	I1213 14:40:47.608531 1287458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:40:47.608826 1287458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:40:47.609228 1287458 out.go:368] Setting JSON to false
	I1213 14:40:47.610247 1287458 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22996,"bootTime":1765613851,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:40:47.610360 1287458 start.go:143] virtualization:  
	I1213 14:40:47.613641 1287458 out.go:179] * [functional-831661] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 14:40:47.616877 1287458 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:40:47.616956 1287458 notify.go:221] Checking for updates...
	I1213 14:40:47.620697 1287458 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:40:47.623635 1287458 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:40:47.626528 1287458 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:40:47.629323 1287458 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 14:40:47.632304 1287458 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:40:47.635694 1287458 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 14:40:47.636263 1287458 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:40:47.661140 1287458 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:40:47.661263 1287458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:40:47.721381 1287458 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 14:40:47.71179903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:40:47.721493 1287458 docker.go:319] overlay module found
	I1213 14:40:47.724654 1287458 out.go:179] * Using the docker driver based on existing profile
	I1213 14:40:47.727441 1287458 start.go:309] selected driver: docker
	I1213 14:40:47.727466 1287458 start.go:927] validating driver "docker" against &{Name:functional-831661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-831661 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:40:47.727598 1287458 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:40:47.731116 1287458 out.go:203] 
	W1213 14:40:47.733987 1287458 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 14:40:47.736826 1287458 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-831661 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.46s)
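A dry run validates flags against the existing profile without touching it. A sketch of the two cases exercised above; 250MB is below the 1800MB usable minimum, so the first command fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY in the run above):

# undersized memory request: flag validation fails, nothing is started
minikube start -p functional-demo --dry-run --memory 250MB \
  --alsologtostderr --driver=docker --container-runtime=containerd || echo "exit=$?"

# without the memory override the same dry run succeeds against the existing profile
minikube start -p functional-demo --dry-run --alsologtostderr -v=1 \
  --driver=docker --container-runtime=containerd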

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-831661 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-831661 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (197.773386ms)

                                                
                                                
-- stdout --
	* [functional-831661] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 14:40:47.414358 1287409 out.go:360] Setting OutFile to fd 1 ...
	I1213 14:40:47.414559 1287409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:40:47.414569 1287409 out.go:374] Setting ErrFile to fd 2...
	I1213 14:40:47.414574 1287409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 14:40:47.415723 1287409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 14:40:47.416163 1287409 out.go:368] Setting JSON to false
	I1213 14:40:47.417263 1287409 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22996,"bootTime":1765613851,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 14:40:47.417351 1287409 start.go:143] virtualization:  
	I1213 14:40:47.422372 1287409 out.go:179] * [functional-831661] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1213 14:40:47.425506 1287409 notify.go:221] Checking for updates...
	I1213 14:40:47.429642 1287409 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 14:40:47.432642 1287409 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 14:40:47.435535 1287409 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 14:40:47.438315 1287409 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 14:40:47.441145 1287409 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 14:40:47.444019 1287409 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 14:40:47.447410 1287409 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 14:40:47.448000 1287409 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 14:40:47.472606 1287409 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 14:40:47.472732 1287409 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 14:40:47.539046 1287409 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 14:40:47.528409463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 14:40:47.539163 1287409 docker.go:319] overlay module found
	I1213 14:40:47.542415 1287409 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 14:40:47.545217 1287409 start.go:309] selected driver: docker
	I1213 14:40:47.545244 1287409 start.go:927] validating driver "docker" against &{Name:functional-831661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-831661 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 14:40:47.545366 1287409 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 14:40:47.548803 1287409 out.go:203] 
	W1213 14:40:47.551562 1287409 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 14:40:47.554352 1287409 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-831661 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-831661 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-l8df7" [aaeda62d-61d3-4975-b9a8-20e2a89a4a9a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-l8df7" [aaeda62d-61d3-4975-b9a8-20e2a89a4a9a] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003589436s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31451
functional_test.go:1680: http://192.168.49.2:31451: success! body:
Request served by hello-node-connect-7d85dfc575-l8df7

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31451
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.64s)
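The ServiceCmdConnect sequence maps onto a handful of commands; the deployment name, image, and port mirror the log, the profile/context name is a placeholder:

# deploy the echo server and expose it on a NodePort
kubectl --context functional-demo create deployment hello-node-connect --image kicbase/echo-server
kubectl --context functional-demo expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-demo wait --for=condition=Available deployment/hello-node-connect --timeout=120s

# ask minikube for a reachable URL and hit it; the echo server replies with the
# request details shown in the log above
URL=$(minikube -p functional-demo service hello-node-connect --url)
curl -s "$URL"

kubectl --context functional-demo delete deployment,service hello-node-connect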

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (21.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [23b02876-3bb1-44d6-95d7-bfe58f49bda6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003740894s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-831661 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-831661 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-831661 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-831661 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [15bf7216-a8a3-4cf2-976b-db6d4b49bdb9] Pending
helpers_test.go:353: "sp-pod" [15bf7216-a8a3-4cf2-976b-db6d4b49bdb9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [15bf7216-a8a3-4cf2-976b-db6d4b49bdb9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003821989s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-831661 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-831661 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-831661 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [84579101-e600-4695-8c2a-bd0bae97161c] Pending
helpers_test.go:353: "sp-pod" [84579101-e600-4695-8c2a-bd0bae97161c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003714874s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-831661 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.94s)
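
The persistence check above can be sketched as follows (commands taken from this run; the pvc.yaml and pod.yaml contents live in the minikube repo's testdata and are not reproduced here): claim a PVC, write a file from the pod, delete and recreate the pod, then confirm the file survived.
    kubectl --context functional-831661 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-831661 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-831661 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-831661 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-831661 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-831661 exec sp-pod -- ls /tmp/mount   # foo should still be present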

                                                
                                    
TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh -n functional-831661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 cp functional-831661:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3551924185/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh -n functional-831661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh -n functional-831661 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.42s)
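
A minimal sketch of the same copy round-trip (host to node, node back to host); the local destination path /tmp/cp-test-copy.txt is hypothetical, everything else mirrors the commands above:
    out/minikube-linux-arm64 -p functional-831661 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-831661 ssh -n functional-831661 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p functional-831661 cp functional-831661:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    diff testdata/cp-test.txt /tmp/cp-test-copy.txt   # should produce no output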

                                                
                                    
TestFunctional/parallel/FileSync (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1252934/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "sudo cat /etc/test/nested/copy/1252934/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

                                                
                                    
TestFunctional/parallel/CertSync (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1252934.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "sudo cat /etc/ssl/certs/1252934.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1252934.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "sudo cat /usr/share/ca-certificates/1252934.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/12529342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "sudo cat /etc/ssl/certs/12529342.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/12529342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "sudo cat /usr/share/ca-certificates/12529342.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)
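
What this test asserts is that the per-user certificate 1252934.pem (and the hashed symlink names derived from it) has been synced into the node's trust locations. A small sketch, assuming the same paths as above:
    for f in /etc/ssl/certs/1252934.pem /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/51391683.0; do
      out/minikube-linux-arm64 -p functional-831661 ssh "sudo test -s $f && echo $f: present"
    done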

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-831661 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-831661 ssh "sudo systemctl is-active docker": exit status 1 (285.479846ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-831661 ssh "sudo systemctl is-active crio": exit status 1 (286.899697ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
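
The point of this test is that only the selected runtime (containerd) is active; docker and crio must report inactive. systemctl is-active exits with status 3 for an inactive unit, which is why the remote process above reports status 3 while the command itself is still the expected "inactive". A quick manual version:
    out/minikube-linux-arm64 -p functional-831661 ssh "sudo systemctl is-active containerd"   # expected: active
    out/minikube-linux-arm64 -p functional-831661 ssh "sudo systemctl is-active docker"       # expected: inactive (non-zero exit)
    out/minikube-linux-arm64 -p functional-831661 ssh "sudo systemctl is-active crio"         # expected: inactive (non-zero exit)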

                                                
                                    
TestFunctional/parallel/License (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-831661 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-831661 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-831661 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-831661 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1284854: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-831661 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-831661 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [b58aa588-51dc-4298-8081-8b23db80da12] Pending
helpers_test.go:353: "nginx-svc" [b58aa588-51dc-4298-8081-8b23db80da12] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [b58aa588-51dc-4298-8081-8b23db80da12] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003593068s
I1213 14:40:27.582182 1252934 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-831661 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.93.161 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
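
The tunnel subtests run serially: start minikube tunnel, deploy the nginx-svc LoadBalancer service, wait for it to receive an ingress IP, then probe that IP directly. A rough sketch assuming two shells (the IP 10.109.93.161 is the one assigned in this run; it will differ elsewhere):
    # shell 1: keep the tunnel running for the duration of the test
    out/minikube-linux-arm64 -p functional-831661 tunnel --alsologtostderr
    # shell 2: deploy the service and query its ingress IP, then curl it
    kubectl --context functional-831661 apply -f testdata/testsvc.yaml
    kubectl --context functional-831661 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -s http://10.109.93.161/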

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-831661 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-831661 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-831661 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-bptzh" [fcbc0e0e-8aea-4758-8f97-f2328398dd0a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-bptzh" [fcbc0e0e-8aea-4758-8f97-f2328398dd0a] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003339714s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.20s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "383.495362ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "57.244139ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "374.753165ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.116658ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
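
The profile subtests time both the full listing and the --light variant; --light skips probing cluster status, which is why it returns in roughly 54ms here versus roughly 375ms for the full listing. To compare the two by hand:
    time out/minikube-linux-arm64 profile list -o json
    time out/minikube-linux-arm64 profile list -o json --light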

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-831661 /tmp/TestFunctionalparallelMountCmdany-port855379292/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765636841908615297" to /tmp/TestFunctionalparallelMountCmdany-port855379292/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765636841908615297" to /tmp/TestFunctionalparallelMountCmdany-port855379292/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765636841908615297" to /tmp/TestFunctionalparallelMountCmdany-port855379292/001/test-1765636841908615297
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-831661 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (423.331319ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 14:40:42.334942 1252934 retry.go:31] will retry after 283.373538ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 14:40 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 14:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 14:40 test-1765636841908615297
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh cat /mount-9p/test-1765636841908615297
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-831661 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [85d61df3-736f-4ddf-940b-1245becf82b2] Pending
helpers_test.go:353: "busybox-mount" [85d61df3-736f-4ddf-940b-1245becf82b2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [85d61df3-736f-4ddf-940b-1245becf82b2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [85d61df3-736f-4ddf-940b-1245becf82b2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003659949s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-831661 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-831661 /tmp/TestFunctionalparallelMountCmdany-port855379292/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.33s)
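
Outline of the 9p mount check above, for reference: mount a host directory into the node, verify it with findmnt, let the busybox-mount pod read and write through it, then unmount. The host path below is hypothetical; the rest mirrors this run:
    out/minikube-linux-arm64 mount -p functional-831661 /tmp/some-host-dir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-831661 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-831661 ssh -- ls -la /mount-9p
    kubectl --context functional-831661 replace --force -f testdata/busybox-mount-test.yaml
    out/minikube-linux-arm64 -p functional-831661 ssh "sudo umount -f /mount-9p"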

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 service list -o json
functional_test.go:1504: Took "525.255426ms" to run "out/minikube-linux-arm64 -p functional-831661 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31658
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31658
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
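
The three URL-oriented ServiceCmd subtests above query the same NodePort service in different output shapes; the exact invocations from this run are:
    out/minikube-linux-arm64 -p functional-831661 service --namespace=default --https --url hello-node
    out/minikube-linux-arm64 -p functional-831661 service hello-node --url --format={{.IP}}
    out/minikube-linux-arm64 -p functional-831661 service hello-node --url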

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-831661 /tmp/TestFunctionalparallelMountCmdspecific-port2333833003/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-831661 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (424.302271ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 14:40:50.661661 1252934 retry.go:31] will retry after 341.124901ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-831661 /tmp/TestFunctionalparallelMountCmdspecific-port2333833003/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-831661 ssh "sudo umount -f /mount-9p": exit status 1 (335.32601ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-831661 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-831661 /tmp/TestFunctionalparallelMountCmdspecific-port2333833003/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-831661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3399558197/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-831661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3399558197/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-831661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3399558197/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-831661 ssh "findmnt -T" /mount1: exit status 1 (1.046625237s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 14:40:53.316272 1252934 retry.go:31] will retry after 573.80032ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-831661 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-831661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3399558197/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-831661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3399558197/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-831661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3399558197/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.96s)
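
The cleanup check starts three concurrent mounts and then tears them all down with mount --kill=true; the "unable to find parent, assuming dead" lines simply mean the mount daemons were already gone when the test tried to stop them. A sketch (the shared host path is hypothetical):
    for m in /mount1 /mount2 /mount3; do
      out/minikube-linux-arm64 mount -p functional-831661 /tmp/shared-host-dir:$m --alsologtostderr -v=1 &
    done
    out/minikube-linux-arm64 -p functional-831661 ssh "findmnt -T" /mount1
    out/minikube-linux-arm64 mount -p functional-831661 --kill=true   # kills all mount processes for the profile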

                                                
                                    
TestFunctional/parallel/Version/short (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 version --short
--- PASS: TestFunctional/parallel/Version/short (0.16s)

                                                
                                    
TestFunctional/parallel/Version/components (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-831661 version -o=json --components: (1.499324541s)
--- PASS: TestFunctional/parallel/Version/components (1.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-831661 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-831661
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-831661
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-831661 image ls --format short --alsologtostderr:
I1213 14:41:03.275054 1290524 out.go:360] Setting OutFile to fd 1 ...
I1213 14:41:03.275297 1290524 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:41:03.275363 1290524 out.go:374] Setting ErrFile to fd 2...
I1213 14:41:03.275385 1290524 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:41:03.275722 1290524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 14:41:03.276481 1290524 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 14:41:03.276669 1290524 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 14:41:03.277359 1290524 cli_runner.go:164] Run: docker container inspect functional-831661 --format={{.State.Status}}
I1213 14:41:03.295212 1290524 ssh_runner.go:195] Run: systemctl --version
I1213 14:41:03.295261 1290524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-831661
I1213 14:41:03.327134 1290524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-831661/id_rsa Username:docker}
I1213 14:41:03.451934 1290524 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
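
The image-list subtests that follow all exercise the same command in its four output formats; on the containerd runtime the data ultimately comes from "sudo crictl images --output json" on the node, as the stderr above shows:
    out/minikube-linux-arm64 -p functional-831661 image ls --format short
    out/minikube-linux-arm64 -p functional-831661 image ls --format table
    out/minikube-linux-arm64 -p functional-831661 image ls --format json
    out/minikube-linux-arm64 -p functional-831661 image ls --format yaml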

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-831661 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:1b3491 │ 20.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:94bff1 │ 22.8MB │
│ docker.io/library/minikube-local-cache-test │ functional-831661  │ sha256:e02605 │ 992B   │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:10afed │ 23MB   │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:b178af │ 24.6MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:4f982e │ 15.8MB │
│ docker.io/kicbase/echo-server               │ functional-831661  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-831661 image ls --format table --alsologtostderr:
I1213 14:41:03.561868 1290598 out.go:360] Setting OutFile to fd 1 ...
I1213 14:41:03.562486 1290598 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:41:03.562501 1290598 out.go:374] Setting ErrFile to fd 2...
I1213 14:41:03.562521 1290598 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:41:03.562816 1290598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 14:41:03.563489 1290598 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 14:41:03.563603 1290598 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 14:41:03.564113 1290598 cli_runner.go:164] Run: docker container inspect functional-831661 --format={{.State.Status}}
I1213 14:41:03.591472 1290598 ssh_runner.go:195] Run: systemctl --version
I1213 14:41:03.591533 1290598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-831661
I1213 14:41:03.612748 1290598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-831661/id_rsa Username:docker}
I1213 14:41:03.722044 1290598 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-831661 image ls --format json --alsologtostderr:
[{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226
c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-831661"],"size":"2173567"},{"id":"sha256:e026052059b45d788f94e5aa4af0bc6e32bbfa2d449adbca80836f551dadd042","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-831661"],"size":"992"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"},{"id":"sha256:1611cd07b61d57dbb
febe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22985759"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/ku
be-apiserver:v1.34.2"],"size":"24559643"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d2
2152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-831661 image ls --format json --alsologtostderr:
I1213 14:41:03.270478 1290525 out.go:360] Setting OutFile to fd 1 ...
I1213 14:41:03.270698 1290525 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:41:03.270726 1290525 out.go:374] Setting ErrFile to fd 2...
I1213 14:41:03.270746 1290525 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:41:03.271032 1290525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 14:41:03.271744 1290525 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 14:41:03.271913 1290525 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 14:41:03.272482 1290525 cli_runner.go:164] Run: docker container inspect functional-831661 --format={{.State.Status}}
I1213 14:41:03.295202 1290525 ssh_runner.go:195] Run: systemctl --version
I1213 14:41:03.295258 1290525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-831661
I1213 14:41:03.325529 1290525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-831661/id_rsa Username:docker}
I1213 14:41:03.430537 1290525 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-831661 image ls --format yaml --alsologtostderr:
- id: sha256:e026052059b45d788f94e5aa4af0bc6e32bbfa2d449adbca80836f551dadd042
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-831661
size: "992"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-831661
size: "2173567"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
- id: sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22985759"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-831661 image ls --format yaml --alsologtostderr:
I1213 14:41:03.837709 1290693 out.go:360] Setting OutFile to fd 1 ...
I1213 14:41:03.837821 1290693 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:41:03.837834 1290693 out.go:374] Setting ErrFile to fd 2...
I1213 14:41:03.837839 1290693 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:41:03.838169 1290693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 14:41:03.838797 1290693 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 14:41:03.838918 1290693 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 14:41:03.839492 1290693 cli_runner.go:164] Run: docker container inspect functional-831661 --format={{.State.Status}}
I1213 14:41:03.866057 1290693 ssh_runner.go:195] Run: systemctl --version
I1213 14:41:03.866120 1290693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-831661
I1213 14:41:03.888714 1290693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-831661/id_rsa Username:docker}
I1213 14:41:04.007218 1290693 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-831661 ssh pgrep buildkitd: exit status 1 (362.582802ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image build -t localhost/my-image:functional-831661 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-831661 image build -t localhost/my-image:functional-831661 testdata/build --alsologtostderr: (3.524544815s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-831661 image build -t localhost/my-image:functional-831661 testdata/build --alsologtostderr:
I1213 14:41:03.945037 1290719 out.go:360] Setting OutFile to fd 1 ...
I1213 14:41:03.948345 1290719 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:41:03.948381 1290719 out.go:374] Setting ErrFile to fd 2...
I1213 14:41:03.948389 1290719 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:41:03.948685 1290719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 14:41:03.949392 1290719 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 14:41:03.953195 1290719 config.go:182] Loaded profile config "functional-831661": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 14:41:03.953755 1290719 cli_runner.go:164] Run: docker container inspect functional-831661 --format={{.State.Status}}
I1213 14:41:03.972615 1290719 ssh_runner.go:195] Run: systemctl --version
I1213 14:41:03.972680 1290719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-831661
I1213 14:41:03.991029 1290719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-831661/id_rsa Username:docker}
I1213 14:41:04.107304 1290719 build_images.go:162] Building image from path: /tmp/build.3122124269.tar
I1213 14:41:04.107405 1290719 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 14:41:04.115961 1290719 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3122124269.tar
I1213 14:41:04.119805 1290719 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3122124269.tar: stat -c "%s %y" /var/lib/minikube/build/build.3122124269.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3122124269.tar': No such file or directory
I1213 14:41:04.119879 1290719 ssh_runner.go:362] scp /tmp/build.3122124269.tar --> /var/lib/minikube/build/build.3122124269.tar (3072 bytes)
I1213 14:41:04.137235 1290719 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3122124269
I1213 14:41:04.145297 1290719 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3122124269 -xf /var/lib/minikube/build/build.3122124269.tar
I1213 14:41:04.153780 1290719 containerd.go:394] Building image: /var/lib/minikube/build/build.3122124269
I1213 14:41:04.153852 1290719 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3122124269 --local dockerfile=/var/lib/minikube/build/build.3122124269 --output type=image,name=localhost/my-image:functional-831661
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.7s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.7s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:c3f3b8121e441d34d58980384ef3650e87948946b5da78e3792bf910842b4dbd
#8 exporting manifest sha256:c3f3b8121e441d34d58980384ef3650e87948946b5da78e3792bf910842b4dbd 0.0s done
#8 exporting config sha256:b06228b93ecd255520ea5ba604e2410218552ae270170533677fbfaa04b9c20d 0.0s done
#8 naming to localhost/my-image:functional-831661 done
#8 DONE 0.2s
I1213 14:41:07.373938 1290719 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3122124269 --local dockerfile=/var/lib/minikube/build/build.3122124269 --output type=image,name=localhost/my-image:functional-831661: (3.220047842s)
I1213 14:41:07.374021 1290719 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3122124269
I1213 14:41:07.382249 1290719 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3122124269.tar
I1213 14:41:07.389979 1290719 build_images.go:218] Built localhost/my-image:functional-831661 from /tmp/build.3122124269.tar
I1213 14:41:07.390009 1290719 build_images.go:134] succeeded building to: functional-831661
I1213 14:41:07.390014 1290719 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.12s)
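
Note on the build above: the three buildkit steps in the log ([1/3] FROM gcr.io/k8s-minikube/busybox, [2/3] RUN true, [3/3] ADD content.txt /) imply that the testdata/build context contains a Dockerfile along the lines of the sketch below. This is reconstructed from the buildkit output only, not the verbatim file from the repository:

	# inferred sketch of testdata/build/Dockerfile (assumption, based on the buildkit log above)
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /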

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-831661
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image load --daemon kicbase/echo-server:functional-831661 --alsologtostderr
2025/12/13 14:40:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-831661 image load --daemon kicbase/echo-server:functional-831661 --alsologtostderr: (1.005976375s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image load --daemon kicbase/echo-server:functional-831661 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-831661
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image load --daemon kicbase/echo-server:functional-831661 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-831661 image load --daemon kicbase/echo-server:functional-831661 --alsologtostderr: (1.192640205s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image save kicbase/echo-server:functional-831661 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image rm kicbase/echo-server:functional-831661 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-831661
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-831661 image save --daemon kicbase/echo-server:functional-831661 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-831661
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-831661
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-831661
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-831661
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-562018 cache add registry.k8s.io/pause:3.1: (1.147115045s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-562018 cache add registry.k8s.io/pause:3.3: (1.096149853s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-562018 cache add registry.k8s.io/pause:latest: (1.164066546s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach719038861/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 cache add minikube-local-cache-test:functional-562018
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 cache delete minikube-local-cache-test:functional-562018
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-562018
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.378198ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.83s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.95s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.94s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3733627292/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.94s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 config get cpus: exit status 14 (74.990591ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 config get cpus: exit status 14 (67.11546ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562018 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-562018 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (200.250548ms)

                                                
                                                
-- stdout --
	* [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 15:10:14.471164 1321594 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:10:14.471382 1321594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:10:14.471396 1321594 out.go:374] Setting ErrFile to fd 2...
	I1213 15:10:14.471402 1321594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:10:14.471693 1321594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:10:14.472114 1321594 out.go:368] Setting JSON to false
	I1213 15:10:14.473050 1321594 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24763,"bootTime":1765613851,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 15:10:14.473129 1321594 start.go:143] virtualization:  
	I1213 15:10:14.477020 1321594 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 15:10:14.480050 1321594 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 15:10:14.480176 1321594 notify.go:221] Checking for updates...
	I1213 15:10:14.486146 1321594 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 15:10:14.489081 1321594 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 15:10:14.492053 1321594 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 15:10:14.494922 1321594 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 15:10:14.497915 1321594 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 15:10:14.501193 1321594 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 15:10:14.501796 1321594 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 15:10:14.528323 1321594 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 15:10:14.528470 1321594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 15:10:14.602985 1321594 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 15:10:14.591249475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 15:10:14.603101 1321594 docker.go:319] overlay module found
	I1213 15:10:14.606160 1321594 out.go:179] * Using the docker driver based on existing profile
	I1213 15:10:14.609072 1321594 start.go:309] selected driver: docker
	I1213 15:10:14.609098 1321594 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 15:10:14.609223 1321594 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 15:10:14.612665 1321594 out.go:203] 
	W1213 15:10:14.615628 1321594 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 15:10:14.618470 1321594 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562018 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562018 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-562018 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (218.873474ms)

                                                
                                                
-- stdout --
	* [functional-562018] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 15:10:14.924491 1321719 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:10:14.924614 1321719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:10:14.924624 1321719 out.go:374] Setting ErrFile to fd 2...
	I1213 15:10:14.924629 1321719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:10:14.925025 1321719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:10:14.925420 1321719 out.go:368] Setting JSON to false
	I1213 15:10:14.926253 1321719 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24764,"bootTime":1765613851,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 15:10:14.926325 1321719 start.go:143] virtualization:  
	I1213 15:10:14.929616 1321719 out.go:179] * [functional-562018] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1213 15:10:14.933349 1321719 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 15:10:14.933442 1321719 notify.go:221] Checking for updates...
	I1213 15:10:14.939184 1321719 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 15:10:14.942058 1321719 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 15:10:14.944885 1321719 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 15:10:14.947818 1321719 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 15:10:14.950726 1321719 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 15:10:14.954103 1321719 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 15:10:14.954711 1321719 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 15:10:14.977567 1321719 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 15:10:14.977713 1321719 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 15:10:15.066292 1321719 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 15:10:15.055562981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 15:10:15.066417 1321719 docker.go:319] overlay module found
	I1213 15:10:15.069497 1321719 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 15:10:15.072536 1321719 start.go:309] selected driver: docker
	I1213 15:10:15.072573 1321719 start.go:927] validating driver "docker" against &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 15:10:15.072699 1321719 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 15:10:15.076744 1321719 out.go:203] 
	W1213 15:10:15.079852 1321719 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 15:10:15.082795 1321719 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.70s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh -n functional-562018 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 cp functional-562018:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp4290011930/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh -n functional-562018 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh -n functional-562018 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1252934/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "sudo cat /etc/test/nested/copy/1252934/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1252934.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "sudo cat /etc/ssl/certs/1252934.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1252934.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "sudo cat /usr/share/ca-certificates/1252934.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/12529342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "sudo cat /etc/ssl/certs/12529342.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/12529342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "sudo cat /usr/share/ca-certificates/12529342.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 ssh "sudo systemctl is-active docker": exit status 1 (380.940105ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 ssh "sudo systemctl is-active crio": exit status 1 (359.558856ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.74s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-562018 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-562018
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-562018
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562018 image ls --format short --alsologtostderr:
I1213 15:10:17.907704 1322365 out.go:360] Setting OutFile to fd 1 ...
I1213 15:10:17.907822 1322365 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:17.907833 1322365 out.go:374] Setting ErrFile to fd 2...
I1213 15:10:17.907840 1322365 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:17.908092 1322365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 15:10:17.908749 1322365 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 15:10:17.908897 1322365 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 15:10:17.909412 1322365 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
I1213 15:10:17.926634 1322365 ssh_runner.go:195] Run: systemctl --version
I1213 15:10:17.926698 1322365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 15:10:17.944403 1322365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
I1213 15:10:18.050483 1322365 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-562018 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:404c2e │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:163787 │ 15.4MB │
│ docker.io/kicbase/echo-server               │ functional-562018  │ sha256:ce2d2c │ 2.17MB │
│ localhost/my-image                          │ functional-562018  │ sha256:acdceb │ 831kB  │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:ccd634 │ 24.7MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/library/minikube-local-cache-test │ functional-562018  │ sha256:e02605 │ 992B   │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562018 image ls --format table --alsologtostderr:
I1213 15:10:22.168092 1322769 out.go:360] Setting OutFile to fd 1 ...
I1213 15:10:22.168262 1322769 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:22.168272 1322769 out.go:374] Setting ErrFile to fd 2...
I1213 15:10:22.168277 1322769 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:22.168520 1322769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 15:10:22.169151 1322769 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 15:10:22.169272 1322769 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 15:10:22.169807 1322769 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
I1213 15:10:22.187439 1322769 ssh_runner.go:195] Run: systemctl --version
I1213 15:10:22.187495 1322769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 15:10:22.204400 1322769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
I1213 15:10:22.310010 1322769 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-562018 image ls --format json --alsologtostderr:
[{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22429671"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a",
"repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-562018"],"size":"2173567"},{"id":"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"20661043"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15391364"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b4610899694
49f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:e026052059b45d788f94e5aa4af0bc6e32bbfa2d449adbca80836f551dadd042","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-562018"],"size":"992"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e14
4146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24678359"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:acdceb19f63104a58da256dad168d902af0a1e5017b8bd59dbaccc8f16472693","repoDigests":[],"repoTags":["localhost/my-image:functional-562018"],"size":"830616"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562018 image ls --format json --alsologtostderr:
I1213 15:10:21.914296 1322729 out.go:360] Setting OutFile to fd 1 ...
I1213 15:10:21.914596 1322729 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:21.914631 1322729 out.go:374] Setting ErrFile to fd 2...
I1213 15:10:21.914651 1322729 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:21.914955 1322729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 15:10:21.915679 1322729 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 15:10:21.915808 1322729 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 15:10:21.916348 1322729 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
I1213 15:10:21.933592 1322729 ssh_runner.go:195] Run: systemctl --version
I1213 15:10:21.933653 1322729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 15:10:21.951079 1322729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
I1213 15:10:22.060377 1322729 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image ls --format yaml --alsologtostderr
E1213 15:10:18.171411 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-562018 image ls --format yaml --alsologtostderr:
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20661043"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22429671"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15391364"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:e026052059b45d788f94e5aa4af0bc6e32bbfa2d449adbca80836f551dadd042
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-562018
size: "992"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24678359"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-562018
size: "2173567"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562018 image ls --format yaml --alsologtostderr:
I1213 15:10:18.143853 1322402 out.go:360] Setting OutFile to fd 1 ...
I1213 15:10:18.144004 1322402 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:18.144030 1322402 out.go:374] Setting ErrFile to fd 2...
I1213 15:10:18.144048 1322402 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:18.144378 1322402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 15:10:18.145083 1322402 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 15:10:18.145250 1322402 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 15:10:18.145850 1322402 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
I1213 15:10:18.164077 1322402 ssh_runner.go:195] Run: systemctl --version
I1213 15:10:18.164150 1322402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 15:10:18.185831 1322402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
I1213 15:10:18.290049 1322402 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 ssh pgrep buildkitd: exit status 1 (279.471974ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image build -t localhost/my-image:functional-562018 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-562018 image build -t localhost/my-image:functional-562018 testdata/build --alsologtostderr: (3.015601205s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562018 image build -t localhost/my-image:functional-562018 testdata/build --alsologtostderr:
I1213 15:10:18.661032 1322508 out.go:360] Setting OutFile to fd 1 ...
I1213 15:10:18.661163 1322508 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:18.661175 1322508 out.go:374] Setting ErrFile to fd 2...
I1213 15:10:18.661181 1322508 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 15:10:18.661421 1322508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 15:10:18.662029 1322508 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 15:10:18.662681 1322508 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 15:10:18.663281 1322508 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
I1213 15:10:18.681094 1322508 ssh_runner.go:195] Run: systemctl --version
I1213 15:10:18.681163 1322508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 15:10:18.702133 1322508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
I1213 15:10:18.806351 1322508 build_images.go:162] Building image from path: /tmp/build.3497685662.tar
I1213 15:10:18.806424 1322508 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 15:10:18.815167 1322508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3497685662.tar
I1213 15:10:18.819093 1322508 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3497685662.tar: stat -c "%s %y" /var/lib/minikube/build/build.3497685662.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3497685662.tar': No such file or directory
I1213 15:10:18.819125 1322508 ssh_runner.go:362] scp /tmp/build.3497685662.tar --> /var/lib/minikube/build/build.3497685662.tar (3072 bytes)
I1213 15:10:18.839153 1322508 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3497685662
I1213 15:10:18.847481 1322508 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3497685662 -xf /var/lib/minikube/build/build.3497685662.tar
I1213 15:10:18.855389 1322508 containerd.go:394] Building image: /var/lib/minikube/build/build.3497685662
I1213 15:10:18.855476 1322508 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3497685662 --local dockerfile=/var/lib/minikube/build/build.3497685662 --output type=image,name=localhost/my-image:functional-562018
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:28a7ee0cb223d4842fa7fb7ee6107a6e223017a34d2e31996dc6d675d16bd999 0.0s done
#8 exporting config sha256:acdceb19f63104a58da256dad168d902af0a1e5017b8bd59dbaccc8f16472693 0.0s done
#8 naming to localhost/my-image:functional-562018 done
#8 DONE 0.2s
I1213 15:10:21.601391 1322508 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3497685662 --local dockerfile=/var/lib/minikube/build/build.3497685662 --output type=image,name=localhost/my-image:functional-562018: (2.745884989s)
I1213 15:10:21.601481 1322508 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3497685662
I1213 15:10:21.609412 1322508 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3497685662.tar
I1213 15:10:21.617083 1322508 build_images.go:218] Built localhost/my-image:functional-562018 from /tmp/build.3497685662.tar
I1213 15:10:21.617114 1322508 build_images.go:134] succeeded building to: functional-562018
I1213 15:10:21.617119 1322508 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.53s)
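Note: the buildkit steps #5-#7 logged above imply a Dockerfile of roughly the following shape. This is a sketch reconstructed from the build output only, not the actual file under testdata/build, which may differ:

	# reconstructed from the build log above; the real testdata/build Dockerfile may differ
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /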

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-562018
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image load --daemon kicbase/echo-server:functional-562018 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-562018 image load --daemon kicbase/echo-server:functional-562018 --alsologtostderr: (1.155362326s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image load --daemon kicbase/echo-server:functional-562018 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-562018 image load --daemon kicbase/echo-server:functional-562018 --alsologtostderr: (1.074461229s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-562018
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image load --daemon kicbase/echo-server:functional-562018 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-562018 image load --daemon kicbase/echo-server:functional-562018 --alsologtostderr: (1.052162467s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image save kicbase/echo-server:functional-562018 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image rm kicbase/echo-server:functional-562018 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.87s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.87s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-562018
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 image save --daemon kicbase/echo-server:functional-562018 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-562018
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-562018 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-562018 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "335.125515ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.004899ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "335.971041ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "51.816986ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3096654653/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (364.655224ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 15:10:11.020500 1252934 retry.go:31] will retry after 516.317537ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3096654653/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 ssh "sudo umount -f /mount-9p": exit status 1 (293.956104ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-562018 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3096654653/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.97s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1452398466/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1452398466/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1452398466/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562018 ssh "findmnt -T" /mount1: exit status 1 (600.405342ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 15:10:13.232172 1252934 retry.go:31] will retry after 304.286529ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-562018 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-562018 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1452398466/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1452398466/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562018 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1452398466/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.81s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-562018
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-562018
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-562018
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (172.31s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1213 15:13:23.671490 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:13:23.677866 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:13:23.689248 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:13:23.710670 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:13:23.752029 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:13:23.833401 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:13:23.994931 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:13:24.316443 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:13:24.957858 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:13:26.239161 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:13:28.800985 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:13:33.922967 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:13:42.553201 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:13:44.164237 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:14:04.645616 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:14:45.607514 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:15:18.171480 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m51.395235036s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (172.31s)

TestMultiControlPlane/serial/DeployApp (6.84s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 kubectl -- rollout status deployment/busybox: (3.879653907s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-gt226 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-j6tp5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-qgx96 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-gt226 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-j6tp5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-qgx96 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-gt226 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-j6tp5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-qgx96 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.84s)
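Note: the in-cluster DNS checks above can be reproduced by hand against any running cluster; a minimal sketch using the commands logged in this run (the ha-433480 profile and busybox pod names are specific to this run):
  $ kubectl --context ha-433480 get pods -o jsonpath='{.items[*].metadata.name}'
  $ kubectl --context ha-433480 exec busybox-7b57f96db7-gt226 -- nslookup kubernetes.default.svc.cluster.local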

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-gt226 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-gt226 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-j6tp5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-j6tp5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-qgx96 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 kubectl -- exec busybox-7b57f96db7-qgx96 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.72s)
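Note: the host-reachability check resolves host.minikube.internal from inside a pod and then pings the resulting address; a hand-run sketch of the same two steps (pod name is run-specific, 192.168.49.1 is the gateway seen in this run):
  $ kubectl --context ha-433480 exec busybox-7b57f96db7-gt226 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  $ kubectl --context ha-433480 exec busybox-7b57f96db7-gt226 -- sh -c "ping -c 1 192.168.49.1"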

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (61.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 node add --alsologtostderr -v 5
E1213 15:16:07.529633 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 node add --alsologtostderr -v 5: (1m0.075126246s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 status --alsologtostderr -v 5: (1.114887123s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.19s)
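Note: adding a worker to an existing HA profile outside the harness uses the same two commands the test drives; a minimal sketch (minikube here stands for the binary under test, out/minikube-linux-arm64; profile name from this run):
  $ minikube -p ha-433480 node add
  $ minikube -p ha-433480 status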

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-433480 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.097452167s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (20.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 status --output json --alsologtostderr -v 5: (1.071054978s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp testdata/cp-test.txt ha-433480:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3433099566/001/cp-test_ha-433480.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480:/home/docker/cp-test.txt ha-433480-m02:/home/docker/cp-test_ha-433480_ha-433480-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m02 "sudo cat /home/docker/cp-test_ha-433480_ha-433480-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480:/home/docker/cp-test.txt ha-433480-m03:/home/docker/cp-test_ha-433480_ha-433480-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m03 "sudo cat /home/docker/cp-test_ha-433480_ha-433480-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480:/home/docker/cp-test.txt ha-433480-m04:/home/docker/cp-test_ha-433480_ha-433480-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m04 "sudo cat /home/docker/cp-test_ha-433480_ha-433480-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp testdata/cp-test.txt ha-433480-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3433099566/001/cp-test_ha-433480-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480-m02:/home/docker/cp-test.txt ha-433480:/home/docker/cp-test_ha-433480-m02_ha-433480.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480 "sudo cat /home/docker/cp-test_ha-433480-m02_ha-433480.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480-m02:/home/docker/cp-test.txt ha-433480-m03:/home/docker/cp-test_ha-433480-m02_ha-433480-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m03 "sudo cat /home/docker/cp-test_ha-433480-m02_ha-433480-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480-m02:/home/docker/cp-test.txt ha-433480-m04:/home/docker/cp-test_ha-433480-m02_ha-433480-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m04 "sudo cat /home/docker/cp-test_ha-433480-m02_ha-433480-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp testdata/cp-test.txt ha-433480-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3433099566/001/cp-test_ha-433480-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480-m03:/home/docker/cp-test.txt ha-433480:/home/docker/cp-test_ha-433480-m03_ha-433480.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480 "sudo cat /home/docker/cp-test_ha-433480-m03_ha-433480.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480-m03:/home/docker/cp-test.txt ha-433480-m02:/home/docker/cp-test_ha-433480-m03_ha-433480-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m02 "sudo cat /home/docker/cp-test_ha-433480-m03_ha-433480-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480-m03:/home/docker/cp-test.txt ha-433480-m04:/home/docker/cp-test_ha-433480-m03_ha-433480-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m04 "sudo cat /home/docker/cp-test_ha-433480-m03_ha-433480-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp testdata/cp-test.txt ha-433480-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3433099566/001/cp-test_ha-433480-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480-m04:/home/docker/cp-test.txt ha-433480:/home/docker/cp-test_ha-433480-m04_ha-433480.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480 "sudo cat /home/docker/cp-test_ha-433480-m04_ha-433480.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480-m04:/home/docker/cp-test.txt ha-433480-m02:/home/docker/cp-test_ha-433480-m04_ha-433480-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m02 "sudo cat /home/docker/cp-test_ha-433480-m04_ha-433480-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 cp ha-433480-m04:/home/docker/cp-test.txt ha-433480-m03:/home/docker/cp-test_ha-433480-m04_ha-433480-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 ssh -n ha-433480-m03 "sudo cat /home/docker/cp-test_ha-433480-m04_ha-433480-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.68s)
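Note: the copy matrix above exercises every node pair with the same two primitives, copy-in then read-back over ssh; a minimal sketch of the pattern (node and file names from this run):
  $ minikube -p ha-433480 cp testdata/cp-test.txt ha-433480-m02:/home/docker/cp-test.txt
  $ minikube -p ha-433480 ssh -n ha-433480-m02 "sudo cat /home/docker/cp-test.txt"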

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (13.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 node stop m02 --alsologtostderr -v 5: (12.19710114s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-433480 status --alsologtostderr -v 5: exit status 7 (935.341509ms)

                                                
                                                
-- stdout --
	ha-433480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-433480-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-433480-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-433480-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 15:17:05.742767 1340260 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:17:05.742970 1340260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:17:05.742998 1340260 out.go:374] Setting ErrFile to fd 2...
	I1213 15:17:05.743016 1340260 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:17:05.743390 1340260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:17:05.743639 1340260 out.go:368] Setting JSON to false
	I1213 15:17:05.743696 1340260 mustload.go:66] Loading cluster: ha-433480
	I1213 15:17:05.743782 1340260 notify.go:221] Checking for updates...
	I1213 15:17:05.744967 1340260 config.go:182] Loaded profile config "ha-433480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 15:17:05.745027 1340260 status.go:174] checking status of ha-433480 ...
	I1213 15:17:05.745671 1340260 cli_runner.go:164] Run: docker container inspect ha-433480 --format={{.State.Status}}
	I1213 15:17:05.779739 1340260 status.go:371] ha-433480 host status = "Running" (err=<nil>)
	I1213 15:17:05.779764 1340260 host.go:66] Checking if "ha-433480" exists ...
	I1213 15:17:05.780198 1340260 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-433480
	I1213 15:17:05.825212 1340260 host.go:66] Checking if "ha-433480" exists ...
	I1213 15:17:05.825634 1340260 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 15:17:05.825704 1340260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-433480
	I1213 15:17:05.865156 1340260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33923 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/ha-433480/id_rsa Username:docker}
	I1213 15:17:05.993592 1340260 ssh_runner.go:195] Run: systemctl --version
	I1213 15:17:06.002733 1340260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:17:06.030823 1340260 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 15:17:06.119194 1340260 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-13 15:17:06.108096874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 15:17:06.119910 1340260 kubeconfig.go:125] found "ha-433480" server: "https://192.168.49.254:8443"
	I1213 15:17:06.119953 1340260 api_server.go:166] Checking apiserver status ...
	I1213 15:17:06.120040 1340260 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:17:06.135248 1340260 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1445/cgroup
	I1213 15:17:06.144727 1340260 api_server.go:182] apiserver freezer: "10:freezer:/docker/7aeb281299ddf8f30d58c91c3d57bcca40662e87e9e9f67e1ec4261a15904a36/kubepods/burstable/pod94a2237ac4beea5525ee4ea55feb9fbd/ee1ac2999b4700004e8a6f4bc622eafcdc2a493b36432af025ff0afcf58f7ef3"
	I1213 15:17:06.144867 1340260 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7aeb281299ddf8f30d58c91c3d57bcca40662e87e9e9f67e1ec4261a15904a36/kubepods/burstable/pod94a2237ac4beea5525ee4ea55feb9fbd/ee1ac2999b4700004e8a6f4bc622eafcdc2a493b36432af025ff0afcf58f7ef3/freezer.state
	I1213 15:17:06.155841 1340260 api_server.go:204] freezer state: "THAWED"
	I1213 15:17:06.155884 1340260 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 15:17:06.165080 1340260 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 15:17:06.165110 1340260 status.go:463] ha-433480 apiserver status = Running (err=<nil>)
	I1213 15:17:06.165121 1340260 status.go:176] ha-433480 status: &{Name:ha-433480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 15:17:06.165139 1340260 status.go:174] checking status of ha-433480-m02 ...
	I1213 15:17:06.165466 1340260 cli_runner.go:164] Run: docker container inspect ha-433480-m02 --format={{.State.Status}}
	I1213 15:17:06.183896 1340260 status.go:371] ha-433480-m02 host status = "Stopped" (err=<nil>)
	I1213 15:17:06.183923 1340260 status.go:384] host is not running, skipping remaining checks
	I1213 15:17:06.183931 1340260 status.go:176] ha-433480-m02 status: &{Name:ha-433480-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 15:17:06.183953 1340260 status.go:174] checking status of ha-433480-m03 ...
	I1213 15:17:06.184326 1340260 cli_runner.go:164] Run: docker container inspect ha-433480-m03 --format={{.State.Status}}
	I1213 15:17:06.203770 1340260 status.go:371] ha-433480-m03 host status = "Running" (err=<nil>)
	I1213 15:17:06.203813 1340260 host.go:66] Checking if "ha-433480-m03" exists ...
	I1213 15:17:06.204146 1340260 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-433480-m03
	I1213 15:17:06.223100 1340260 host.go:66] Checking if "ha-433480-m03" exists ...
	I1213 15:17:06.223501 1340260 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 15:17:06.223552 1340260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-433480-m03
	I1213 15:17:06.245926 1340260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/ha-433480-m03/id_rsa Username:docker}
	I1213 15:17:06.358271 1340260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:17:06.373820 1340260 kubeconfig.go:125] found "ha-433480" server: "https://192.168.49.254:8443"
	I1213 15:17:06.373851 1340260 api_server.go:166] Checking apiserver status ...
	I1213 15:17:06.373893 1340260 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:17:06.387709 1340260 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1380/cgroup
	I1213 15:17:06.396699 1340260 api_server.go:182] apiserver freezer: "10:freezer:/docker/7281dbe5543b056fcc2df37f47699681c478e8ee3a1947dd4cfe5d8423fd6406/kubepods/burstable/pod6e607c51e64dbb53047b233d158fdc13/a05ae75780884989abd45b17c9da14f907fb037d7bf4c4d396589e57c1c26fca"
	I1213 15:17:06.396773 1340260 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7281dbe5543b056fcc2df37f47699681c478e8ee3a1947dd4cfe5d8423fd6406/kubepods/burstable/pod6e607c51e64dbb53047b233d158fdc13/a05ae75780884989abd45b17c9da14f907fb037d7bf4c4d396589e57c1c26fca/freezer.state
	I1213 15:17:06.405427 1340260 api_server.go:204] freezer state: "THAWED"
	I1213 15:17:06.405466 1340260 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 15:17:06.414121 1340260 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 15:17:06.414154 1340260 status.go:463] ha-433480-m03 apiserver status = Running (err=<nil>)
	I1213 15:17:06.414164 1340260 status.go:176] ha-433480-m03 status: &{Name:ha-433480-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 15:17:06.414182 1340260 status.go:174] checking status of ha-433480-m04 ...
	I1213 15:17:06.414515 1340260 cli_runner.go:164] Run: docker container inspect ha-433480-m04 --format={{.State.Status}}
	I1213 15:17:06.433302 1340260 status.go:371] ha-433480-m04 host status = "Running" (err=<nil>)
	I1213 15:17:06.433328 1340260 host.go:66] Checking if "ha-433480-m04" exists ...
	I1213 15:17:06.433660 1340260 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-433480-m04
	I1213 15:17:06.452972 1340260 host.go:66] Checking if "ha-433480-m04" exists ...
	I1213 15:17:06.453283 1340260 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 15:17:06.453330 1340260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-433480-m04
	I1213 15:17:06.472792 1340260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/ha-433480-m04/id_rsa Username:docker}
	I1213 15:17:06.581281 1340260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:17:06.595505 1340260 status.go:176] ha-433480-m04 status: &{Name:ha-433480-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.13s)
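Note: stopping one control-plane node is expected to leave the other nodes Running while status exits non-zero (exit status 7 above); a hand-run sketch of the same check:
  $ minikube -p ha-433480 node stop m02
  $ minikube -p ha-433480 status; echo "exit=$?"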

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.90s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (13.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 node start m02 --alsologtostderr -v 5: (12.011127169s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 status --alsologtostderr -v 5: (1.446515224s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.091554573s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 stop --alsologtostderr -v 5: (37.600933355s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 start --wait true --alsologtostderr -v 5
E1213 15:18:21.243824 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:18:23.671499 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:18:42.552329 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:18:51.371560 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 start --wait true --alsologtostderr -v 5: (1m14.868794238s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.65s)
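Note: the restart check is a full stop followed by a wait-for-ready start, comparing the node list before and after; a minimal sketch of that sequence:
  $ minikube -p ha-433480 node list
  $ minikube -p ha-433480 stop
  $ minikube -p ha-433480 start --wait true
  $ minikube -p ha-433480 node list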

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 node delete m03 --alsologtostderr -v 5: (10.144044611s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.11s)
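Note: after deleting a secondary control-plane node the remaining nodes should all report Ready; the go-template used above can be run directly, as sketched here:
  $ minikube -p ha-433480 node delete m03
  $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'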

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 stop --alsologtostderr -v 5: (36.329344653s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-433480 status --alsologtostderr -v 5: exit status 7 (116.8067ms)

                                                
                                                
-- stdout --
	ha-433480
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-433480-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-433480-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 15:20:03.147604 1355157 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:20:03.147900 1355157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:20:03.147917 1355157 out.go:374] Setting ErrFile to fd 2...
	I1213 15:20:03.147923 1355157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:20:03.148200 1355157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:20:03.148414 1355157 out.go:368] Setting JSON to false
	I1213 15:20:03.148444 1355157 mustload.go:66] Loading cluster: ha-433480
	I1213 15:20:03.148872 1355157 config.go:182] Loaded profile config "ha-433480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 15:20:03.148896 1355157 status.go:174] checking status of ha-433480 ...
	I1213 15:20:03.149418 1355157 cli_runner.go:164] Run: docker container inspect ha-433480 --format={{.State.Status}}
	I1213 15:20:03.149953 1355157 notify.go:221] Checking for updates...
	I1213 15:20:03.168423 1355157 status.go:371] ha-433480 host status = "Stopped" (err=<nil>)
	I1213 15:20:03.168447 1355157 status.go:384] host is not running, skipping remaining checks
	I1213 15:20:03.168463 1355157 status.go:176] ha-433480 status: &{Name:ha-433480 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 15:20:03.168491 1355157 status.go:174] checking status of ha-433480-m02 ...
	I1213 15:20:03.168801 1355157 cli_runner.go:164] Run: docker container inspect ha-433480-m02 --format={{.State.Status}}
	I1213 15:20:03.186016 1355157 status.go:371] ha-433480-m02 host status = "Stopped" (err=<nil>)
	I1213 15:20:03.186035 1355157 status.go:384] host is not running, skipping remaining checks
	I1213 15:20:03.186042 1355157 status.go:176] ha-433480-m02 status: &{Name:ha-433480-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 15:20:03.186068 1355157 status.go:174] checking status of ha-433480-m04 ...
	I1213 15:20:03.186386 1355157 cli_runner.go:164] Run: docker container inspect ha-433480-m04 --format={{.State.Status}}
	I1213 15:20:03.207106 1355157 status.go:371] ha-433480-m04 host status = "Stopped" (err=<nil>)
	I1213 15:20:03.207128 1355157 status.go:384] host is not running, skipping remaining checks
	I1213 15:20:03.207134 1355157 status.go:176] ha-433480-m04 status: &{Name:ha-433480-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (61.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1213 15:20:18.171536 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m0.73696221s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (61.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.99s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (82.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 node add --control-plane --alsologtostderr -v 5: (1m20.930002481s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-433480 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-433480 status --alsologtostderr -v 5: (1.145657324s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.08s)
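Note: promoting the cluster back to three control planes is the same node add flow with the --control-plane flag; a minimal sketch:
  $ minikube -p ha-433480 node add --control-plane
  $ minikube -p ha-433480 status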

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.093140428s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

                                                
                                    
x
+
TestJSONOutput/start/Command (52.31s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-721808 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1213 15:23:23.671822 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-721808 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (52.303755284s)
--- PASS: TestJSONOutput/start/Command (52.31s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-721808 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-721808 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.99s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-721808 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-721808 --output=json --user=testUser: (5.992725696s)
--- PASS: TestJSONOutput/stop/Command (5.99s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-262354 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-262354 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (96.328945ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"afaa9487-d3ca-41fa-b990-0f292cb3c573","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-262354] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f25ac8da-c011-484f-b5d7-6749d085b5a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22122"}}
	{"specversion":"1.0","id":"19b58695-d996-4ddf-b666-9c3e6ccb777a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f5e01105-4524-432e-869e-313fbeab4af3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig"}}
	{"specversion":"1.0","id":"2a5f60cf-86f8-4883-9417-2b05a32b9a28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube"}}
	{"specversion":"1.0","id":"9752d00d-e261-4ea0-be77-82572e413ff4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b70ca7fe-a1b8-4428-a460-736e3426437c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2c6158b3-12b6-43d9-8b78-833394389894","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-262354" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-262354
E1213 15:23:42.553174 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorJSONOutput (0.25s)
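Note: with --output=json each line on stdout is a CloudEvents-style JSON object, so the error event shown above can be pulled out of the stream; a minimal sketch using jq (jq is not part of the test harness, and the fail driver and profile name simply mirror this test):
  $ minikube start -p json-output-error-262354 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'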

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (40.28s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-034448 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-034448 --network=: (38.004701254s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-034448" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-034448
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-034448: (2.24684859s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.28s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (37.47s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-062638 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-062638 --network=bridge: (35.121048334s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-062638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-062638
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-062638: (2.324344333s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.47s)

                                                
                                    
x
+
TestKicExistingNetwork (36.46s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1213 15:25:00.299226 1252934 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 15:25:00.333115 1252934 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 15:25:00.333212 1252934 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1213 15:25:00.333232 1252934 cli_runner.go:164] Run: docker network inspect existing-network
W1213 15:25:00.361473 1252934 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1213 15:25:00.361528 1252934 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1213 15:25:00.361546 1252934 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1213 15:25:00.361701 1252934 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 15:25:00.399035 1252934 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2ccf9a9eb2c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:9e:64:ed:f4:f2} reservation:<nil>}
I1213 15:25:00.399441 1252934 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ac7530}
I1213 15:25:00.399466 1252934 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1213 15:25:00.399526 1252934 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1213 15:25:00.484429 1252934 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-461051 --network=existing-network
E1213 15:25:18.171478 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-461051 --network=existing-network: (34.107892057s)
helpers_test.go:176: Cleaning up "existing-network-461051" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-461051
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-461051: (2.124487821s)
I1213 15:25:36.743817 1252934 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.46s)

                                                
                                    
TestKicCustomSubnet (36.17s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-422049 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-422049 --subnet=192.168.60.0/24: (33.932889329s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-422049 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-422049" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-422049
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-422049: (2.214175321s)
--- PASS: TestKicCustomSubnet (36.17s)

                                                
                                    
TestKicStaticIP (36.97s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-892153 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-892153 --static-ip=192.168.200.200: (34.552634795s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-892153 ip
helpers_test.go:176: Cleaning up "static-ip-892153" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-892153
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-892153: (2.253370481s)
--- PASS: TestKicStaticIP (36.97s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (73.46s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-450800 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-450800 --driver=docker  --container-runtime=containerd: (34.341819675s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-453827 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-453827 --driver=docker  --container-runtime=containerd: (33.361838606s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-450800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-453827
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-453827" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-453827
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-453827: (2.237807736s)
helpers_test.go:176: Cleaning up "first-450800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-450800
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-450800: (2.071151062s)
--- PASS: TestMinikubeProfile (73.46s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.34s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-341552 --memory=3072 --mount-string /tmp/TestMountStartserial1863984853/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-341552 --memory=3072 --mount-string /tmp/TestMountStartserial1863984853/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.341296913s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.34s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-341552 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.36s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-343275 --memory=3072 --mount-string /tmp/TestMountStartserial1863984853/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-343275 --memory=3072 --mount-string /tmp/TestMountStartserial1863984853/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.354893409s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.36s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-343275 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-341552 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-341552 --alsologtostderr -v=5: (1.702869969s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-343275 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-343275
E1213 15:28:23.672302 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-343275: (1.281691324s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.49s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-343275
E1213 15:28:25.632218 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-343275: (6.487093828s)
--- PASS: TestMountStart/serial/RestartStopped (7.49s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-343275 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.12s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-742835 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1213 15:28:42.552918 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:29:46.735076 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:30:18.171485 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-742835 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m48.584578336s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.12s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.05s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-742835 -- rollout status deployment/busybox: (3.044563555s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- exec busybox-7b57f96db7-4qxwx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- exec busybox-7b57f96db7-zgrzp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- exec busybox-7b57f96db7-4qxwx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- exec busybox-7b57f96db7-zgrzp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- exec busybox-7b57f96db7-4qxwx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- exec busybox-7b57f96db7-zgrzp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.05s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.02s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- exec busybox-7b57f96db7-4qxwx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- exec busybox-7b57f96db7-4qxwx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- exec busybox-7b57f96db7-zgrzp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-742835 -- exec busybox-7b57f96db7-zgrzp -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)

                                                
                                    
TestMultiNode/serial/AddNode (27.88s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-742835 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-742835 -v=5 --alsologtostderr: (27.178292757s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.88s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-742835 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.01s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 cp testdata/cp-test.txt multinode-742835:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 cp multinode-742835:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2814711980/001/cp-test_multinode-742835.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 cp multinode-742835:/home/docker/cp-test.txt multinode-742835-m02:/home/docker/cp-test_multinode-742835_multinode-742835-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835-m02 "sudo cat /home/docker/cp-test_multinode-742835_multinode-742835-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 cp multinode-742835:/home/docker/cp-test.txt multinode-742835-m03:/home/docker/cp-test_multinode-742835_multinode-742835-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835-m03 "sudo cat /home/docker/cp-test_multinode-742835_multinode-742835-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 cp testdata/cp-test.txt multinode-742835-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 cp multinode-742835-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2814711980/001/cp-test_multinode-742835-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 cp multinode-742835-m02:/home/docker/cp-test.txt multinode-742835:/home/docker/cp-test_multinode-742835-m02_multinode-742835.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835 "sudo cat /home/docker/cp-test_multinode-742835-m02_multinode-742835.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 cp multinode-742835-m02:/home/docker/cp-test.txt multinode-742835-m03:/home/docker/cp-test_multinode-742835-m02_multinode-742835-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835-m03 "sudo cat /home/docker/cp-test_multinode-742835-m02_multinode-742835-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 cp testdata/cp-test.txt multinode-742835-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 cp multinode-742835-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2814711980/001/cp-test_multinode-742835-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 cp multinode-742835-m03:/home/docker/cp-test.txt multinode-742835:/home/docker/cp-test_multinode-742835-m03_multinode-742835.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835 "sudo cat /home/docker/cp-test_multinode-742835-m03_multinode-742835.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 cp multinode-742835-m03:/home/docker/cp-test.txt multinode-742835-m02:/home/docker/cp-test_multinode-742835-m03_multinode-742835-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 ssh -n multinode-742835-m02 "sudo cat /home/docker/cp-test_multinode-742835-m03_multinode-742835-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.01s)

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-742835 node stop m03: (1.312585783s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-742835 status: exit status 7 (546.171737ms)

                                                
                                                
-- stdout --
	multinode-742835
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-742835-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-742835-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-742835 status --alsologtostderr: exit status 7 (543.566629ms)

                                                
                                                
-- stdout --
	multinode-742835
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-742835-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-742835-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 15:31:10.416568 1408262 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:31:10.416690 1408262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:31:10.416702 1408262 out.go:374] Setting ErrFile to fd 2...
	I1213 15:31:10.416707 1408262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:31:10.416968 1408262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:31:10.417163 1408262 out.go:368] Setting JSON to false
	I1213 15:31:10.417197 1408262 mustload.go:66] Loading cluster: multinode-742835
	I1213 15:31:10.417597 1408262 config.go:182] Loaded profile config "multinode-742835": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 15:31:10.417620 1408262 status.go:174] checking status of multinode-742835 ...
	I1213 15:31:10.418127 1408262 cli_runner.go:164] Run: docker container inspect multinode-742835 --format={{.State.Status}}
	I1213 15:31:10.418386 1408262 notify.go:221] Checking for updates...
	I1213 15:31:10.438079 1408262 status.go:371] multinode-742835 host status = "Running" (err=<nil>)
	I1213 15:31:10.438102 1408262 host.go:66] Checking if "multinode-742835" exists ...
	I1213 15:31:10.438414 1408262 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-742835
	I1213 15:31:10.467475 1408262 host.go:66] Checking if "multinode-742835" exists ...
	I1213 15:31:10.467776 1408262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 15:31:10.467831 1408262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-742835
	I1213 15:31:10.485636 1408262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/multinode-742835/id_rsa Username:docker}
	I1213 15:31:10.593001 1408262 ssh_runner.go:195] Run: systemctl --version
	I1213 15:31:10.600005 1408262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:31:10.613545 1408262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 15:31:10.680167 1408262 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 15:31:10.670895672 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 15:31:10.680770 1408262 kubeconfig.go:125] found "multinode-742835" server: "https://192.168.67.2:8443"
	I1213 15:31:10.680802 1408262 api_server.go:166] Checking apiserver status ...
	I1213 15:31:10.680846 1408262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 15:31:10.693823 1408262 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1407/cgroup
	I1213 15:31:10.702193 1408262 api_server.go:182] apiserver freezer: "10:freezer:/docker/7460cc4f099f99f6432549000654a61a1572e05d8c2b5a067d2048bf78634012/kubepods/burstable/podd5d672f32f13dadbe8238923ce025353/553b870013fa8712711e9b7798441400de6bbf65d67c0c95b139a6409edcd5df"
	I1213 15:31:10.702270 1408262 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7460cc4f099f99f6432549000654a61a1572e05d8c2b5a067d2048bf78634012/kubepods/burstable/podd5d672f32f13dadbe8238923ce025353/553b870013fa8712711e9b7798441400de6bbf65d67c0c95b139a6409edcd5df/freezer.state
	I1213 15:31:10.710555 1408262 api_server.go:204] freezer state: "THAWED"
	I1213 15:31:10.710583 1408262 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1213 15:31:10.719265 1408262 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1213 15:31:10.719294 1408262 status.go:463] multinode-742835 apiserver status = Running (err=<nil>)
	I1213 15:31:10.719304 1408262 status.go:176] multinode-742835 status: &{Name:multinode-742835 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 15:31:10.719385 1408262 status.go:174] checking status of multinode-742835-m02 ...
	I1213 15:31:10.719734 1408262 cli_runner.go:164] Run: docker container inspect multinode-742835-m02 --format={{.State.Status}}
	I1213 15:31:10.736645 1408262 status.go:371] multinode-742835-m02 host status = "Running" (err=<nil>)
	I1213 15:31:10.736670 1408262 host.go:66] Checking if "multinode-742835-m02" exists ...
	I1213 15:31:10.737011 1408262 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-742835-m02
	I1213 15:31:10.753846 1408262 host.go:66] Checking if "multinode-742835-m02" exists ...
	I1213 15:31:10.754303 1408262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 15:31:10.754360 1408262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-742835-m02
	I1213 15:31:10.772121 1408262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/multinode-742835-m02/id_rsa Username:docker}
	I1213 15:31:10.876622 1408262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 15:31:10.889475 1408262 status.go:176] multinode-742835-m02 status: &{Name:multinode-742835-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 15:31:10.889508 1408262 status.go:174] checking status of multinode-742835-m03 ...
	I1213 15:31:10.889822 1408262 cli_runner.go:164] Run: docker container inspect multinode-742835-m03 --format={{.State.Status}}
	I1213 15:31:10.907548 1408262 status.go:371] multinode-742835-m03 host status = "Stopped" (err=<nil>)
	I1213 15:31:10.907574 1408262 status.go:384] host is not running, skipping remaining checks
	I1213 15:31:10.907581 1408262 status.go:176] multinode-742835-m03 status: &{Name:multinode-742835-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.43s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-742835 node start m03 -v=5 --alsologtostderr: (7.596413121s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.43s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (72.79s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-742835
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-742835
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-742835: (25.176236987s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-742835 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-742835 --wait=true -v=5 --alsologtostderr: (47.473423894s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-742835
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.79s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.68s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-742835 node delete m03: (4.99091434s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.68s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.2s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-742835 stop: (23.997394887s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-742835 status: exit status 7 (103.377459ms)

                                                
                                                
-- stdout --
	multinode-742835
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-742835-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-742835 status --alsologtostderr: exit status 7 (100.300599ms)

                                                
                                                
-- stdout --
	multinode-742835
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-742835-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 15:33:01.965549 1417041 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:33:01.965756 1417041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:33:01.965783 1417041 out.go:374] Setting ErrFile to fd 2...
	I1213 15:33:01.965805 1417041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:33:01.966094 1417041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:33:01.966319 1417041 out.go:368] Setting JSON to false
	I1213 15:33:01.966378 1417041 mustload.go:66] Loading cluster: multinode-742835
	I1213 15:33:01.966461 1417041 notify.go:221] Checking for updates...
	I1213 15:33:01.967778 1417041 config.go:182] Loaded profile config "multinode-742835": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 15:33:01.967996 1417041 status.go:174] checking status of multinode-742835 ...
	I1213 15:33:01.968811 1417041 cli_runner.go:164] Run: docker container inspect multinode-742835 --format={{.State.Status}}
	I1213 15:33:01.987527 1417041 status.go:371] multinode-742835 host status = "Stopped" (err=<nil>)
	I1213 15:33:01.987549 1417041 status.go:384] host is not running, skipping remaining checks
	I1213 15:33:01.987557 1417041 status.go:176] multinode-742835 status: &{Name:multinode-742835 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 15:33:01.987588 1417041 status.go:174] checking status of multinode-742835-m02 ...
	I1213 15:33:01.987907 1417041 cli_runner.go:164] Run: docker container inspect multinode-742835-m02 --format={{.State.Status}}
	I1213 15:33:02.013558 1417041 status.go:371] multinode-742835-m02 host status = "Stopped" (err=<nil>)
	I1213 15:33:02.013579 1417041 status.go:384] host is not running, skipping remaining checks
	I1213 15:33:02.013594 1417041 status.go:176] multinode-742835-m02 status: &{Name:multinode-742835-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.20s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.99s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-742835 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1213 15:33:23.672088 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:33:42.552363 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-742835 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (50.296072674s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-742835 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.99s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.59s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-742835
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-742835-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-742835-m02 --driver=docker  --container-runtime=containerd: exit status 14 (96.567381ms)

                                                
                                                
-- stdout --
	* [multinode-742835-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-742835-m02' is duplicated with machine name 'multinode-742835-m02' in profile 'multinode-742835'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-742835-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-742835-m03 --driver=docker  --container-runtime=containerd: (33.000702489s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-742835
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-742835: exit status 80 (357.851214ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-742835 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-742835-m03 already exists in multinode-742835-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-742835-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-742835-m03: (2.081408739s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.59s)

                                                
                                    
TestPreload (116.41s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-203179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1213 15:35:01.247450 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:35:18.171558 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-203179 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (57.301691889s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-203179 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-203179 image pull gcr.io/k8s-minikube/busybox: (2.452451409s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-203179
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-203179: (5.999580012s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-203179 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-203179 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (47.914476914s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-203179 image list
helpers_test.go:176: Cleaning up "test-preload-203179" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-203179
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-203179: (2.501059853s)
--- PASS: TestPreload (116.41s)

                                                
                                    
TestScheduledStopUnix (111.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-352911 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-352911 --memory=3072 --driver=docker  --container-runtime=containerd: (34.509619706s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-352911 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 15:37:03.856245 1433010 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:37:03.856492 1433010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:37:03.856539 1433010 out.go:374] Setting ErrFile to fd 2...
	I1213 15:37:03.856559 1433010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:37:03.856984 1433010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:37:03.857345 1433010 out.go:368] Setting JSON to false
	I1213 15:37:03.857608 1433010 mustload.go:66] Loading cluster: scheduled-stop-352911
	I1213 15:37:03.858062 1433010 config.go:182] Loaded profile config "scheduled-stop-352911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 15:37:03.858164 1433010 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/config.json ...
	I1213 15:37:03.858403 1433010 mustload.go:66] Loading cluster: scheduled-stop-352911
	I1213 15:37:03.858564 1433010 config.go:182] Loaded profile config "scheduled-stop-352911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-352911 -n scheduled-stop-352911
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-352911 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 15:37:04.329615 1433100 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:37:04.329727 1433100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:37:04.329744 1433100 out.go:374] Setting ErrFile to fd 2...
	I1213 15:37:04.329749 1433100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:37:04.330107 1433100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:37:04.330494 1433100 out.go:368] Setting JSON to false
	I1213 15:37:04.331411 1433100 daemonize_unix.go:73] killing process 1433027 as it is an old scheduled stop
	I1213 15:37:04.335857 1433100 mustload.go:66] Loading cluster: scheduled-stop-352911
	I1213 15:37:04.336340 1433100 config.go:182] Loaded profile config "scheduled-stop-352911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 15:37:04.336432 1433100 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/config.json ...
	I1213 15:37:04.336626 1433100 mustload.go:66] Loading cluster: scheduled-stop-352911
	I1213 15:37:04.336795 1433100 config.go:182] Loaded profile config "scheduled-stop-352911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1213 15:37:04.342854 1252934 retry.go:31] will retry after 61.83µs: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.344024 1252934 retry.go:31] will retry after 109.691µs: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.345154 1252934 retry.go:31] will retry after 283.747µs: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.346343 1252934 retry.go:31] will retry after 344.78µs: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.347499 1252934 retry.go:31] will retry after 692.64µs: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.348622 1252934 retry.go:31] will retry after 711.418µs: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.349709 1252934 retry.go:31] will retry after 1.026561ms: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.350828 1252934 retry.go:31] will retry after 2.428172ms: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.354027 1252934 retry.go:31] will retry after 1.57177ms: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.356192 1252934 retry.go:31] will retry after 2.258253ms: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.358784 1252934 retry.go:31] will retry after 7.735461ms: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.367119 1252934 retry.go:31] will retry after 4.786968ms: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.372741 1252934 retry.go:31] will retry after 15.590684ms: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.388983 1252934 retry.go:31] will retry after 19.601274ms: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.409487 1252934 retry.go:31] will retry after 33.852462ms: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
I1213 15:37:04.443734 1252934 retry.go:31] will retry after 65.303134ms: open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-352911 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-352911 -n scheduled-stop-352911
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-352911
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-352911 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 15:37:30.373228 1433777 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:37:30.373412 1433777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:37:30.373440 1433777 out.go:374] Setting ErrFile to fd 2...
	I1213 15:37:30.373458 1433777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:37:30.373773 1433777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:37:30.374160 1433777 out.go:368] Setting JSON to false
	I1213 15:37:30.374309 1433777 mustload.go:66] Loading cluster: scheduled-stop-352911
	I1213 15:37:30.374714 1433777 config.go:182] Loaded profile config "scheduled-stop-352911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 15:37:30.374828 1433777 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/scheduled-stop-352911/config.json ...
	I1213 15:37:30.375099 1433777 mustload.go:66] Loading cluster: scheduled-stop-352911
	I1213 15:37:30.375258 1433777 config.go:182] Loaded profile config "scheduled-stop-352911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-352911
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-352911: exit status 7 (75.028728ms)

                                                
                                                
-- stdout --
	scheduled-stop-352911
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-352911 -n scheduled-stop-352911
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-352911 -n scheduled-stop-352911: exit status 7 (68.883495ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
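The status --format={{.Host}} call above renders a Go text/template against minikube's status data, which is why stdout is just the host field's value ("Stopped"). A minimal sketch of how such a template renders, with a hypothetical Status struct standing in for the real one:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the struct minikube renders its status template
// against; the field names simply mirror the plain status output above.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, st) // prints "Stopped", matching the stdout above
}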
helpers_test.go:176: Cleaning up "scheduled-stop-352911" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-352911
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-352911: (4.782269435s)
--- PASS: TestScheduledStopUnix (111.02s)

                                                
                                    
x
+
TestInsufficientStorage (12.41s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-677515 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
E1213 15:38:23.672078 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-677515 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.796020689s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f92cff99-73e0-4476-8be4-f588a29853f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-677515] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4775722d-4f46-4280-9ca0-9c53ece5126a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22122"}}
	{"specversion":"1.0","id":"c34e62fe-1ebf-4515-be54-40dcc5473a0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5f18a5dc-63eb-4215-99a5-68387d2299fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig"}}
	{"specversion":"1.0","id":"842719ce-2e37-4f0d-a553-1ba417eb0237","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube"}}
	{"specversion":"1.0","id":"fd7d2394-0783-43ae-8542-a5bfa018994c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0b5da3fe-de20-466a-a7a0-86bf0e771636","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b83c0af4-73c7-4594-b9aa-761aec008d95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1317bd1e-25c6-49d6-9a76-2cbd9c39a142","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d397bcfc-04e0-417f-a28c-1ffed5a00d1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fdaab3ea-ea5b-424a-b998-4bc67258d13a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ac814c7f-4ace-406e-ad82-6c74d7385086","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-677515\" primary control-plane node in \"insufficient-storage-677515\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3be9ce67-324b-475f-82b4-63ae8837a16c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"27ecb5aa-1640-4238-b2fb-79662b5d7be4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"770a6256-88d9-4cd8-b153-45999bf4dfda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
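The stdout above is minikube's --output=json stream: one CloudEvents-style JSON object per line, with specversion, type, and a data map. A minimal sketch of consuming such a stream and picking out error events, with field names copied from the lines above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the JSON lines above; the data values
// shown there are all strings, so a map[string]string is enough for this sketch.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` into this
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Println("error event:", ev.Data["message"])
		}
	}
}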
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-677515 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-677515 --output=json --layout=cluster: exit status 7 (320.44546ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-677515","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-677515","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 15:38:30.410418 1435607 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-677515" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-677515 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-677515 --output=json --layout=cluster: exit status 7 (296.388797ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-677515","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-677515","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 15:38:30.706088 1435671 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-677515" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
	E1213 15:38:30.716509 1435671 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/insufficient-storage-677515/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-677515" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-677515
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-677515: (1.994033073s)
--- PASS: TestInsufficientStorage (12.41s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (315.1s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2008797728 start -p running-upgrade-881130 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2008797728 start -p running-upgrade-881130 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (34.009662349s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-881130 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1213 15:43:23.671926 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:43:42.552690 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:45:05.634199 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:45:18.171415 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:46:26.736653 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-881130 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m37.742697488s)
helpers_test.go:176: Cleaning up "running-upgrade-881130" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-881130
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-881130: (2.025231902s)
--- PASS: TestRunningBinaryUpgrade (315.10s)

                                                
                                    
x
+
TestMissingContainerUpgrade (166s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2972280389 start -p missing-upgrade-823336 --memory=3072 --driver=docker  --container-runtime=containerd
E1213 15:38:42.552564 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2972280389 start -p missing-upgrade-823336 --memory=3072 --driver=docker  --container-runtime=containerd: (1m2.985528689s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-823336
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-823336
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-823336 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-823336 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m38.876867275s)
helpers_test.go:176: Cleaning up "missing-upgrade-823336" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-823336
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-823336: (2.106367294s)
--- PASS: TestMissingContainerUpgrade (166.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-059211 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-059211 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (96.592387ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-059211] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (45.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-059211 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-059211 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (45.20874633s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-059211 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.80s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-059211 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-059211 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (14.880242053s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-059211 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-059211 status -o json: exit status 2 (401.082692ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-059211","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-059211
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-059211: (2.289421081s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-059211 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-059211 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (5.341360082s)
--- PASS: TestNoKubernetes/serial/Start (5.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-059211 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-059211 "sudo systemctl is-active --quiet service kubelet": exit status 1 (288.530508ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
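The check above passes because "systemctl is-active --quiet service kubelet" exits non-zero (status 3, which typically means inactive) when kubelet is not running, which is exactly what a --no-kubernetes profile should report. A minimal sketch of running the same probe and reading the exit code, assuming the binary path and profile name from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the same check the test runs over `minikube ssh`; binary path and
	// profile name are taken from the log above.
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-059211",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	if err == nil {
		fmt.Println("kubelet is active (unexpected for a --no-kubernetes profile)")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok {
		// A non-zero status (3 in the log) means the unit is not active.
		fmt.Println("kubelet not active, exit code:", ee.ExitCode())
	}
}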

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.80s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-059211
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-059211: (1.315010488s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-059211 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-059211 --driver=docker  --container-runtime=containerd: (6.968300116s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-059211 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-059211 "sudo systemctl is-active --quiet service kubelet": exit status 1 (293.14494ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.99s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (53.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3128378638 start -p stopped-upgrade-886397 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3128378638 start -p stopped-upgrade-886397 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (35.002974744s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3128378638 -p stopped-upgrade-886397 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3128378638 -p stopped-upgrade-886397 stop: (1.254156511s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-886397 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-886397 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.192671571s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (53.45s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (2.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-886397
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-886397: (2.378258451s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.38s)

                                                
                                    
x
+
TestPause/serial/Start (81.64s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-582266 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1213 15:48:23.671498 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:48:42.552459 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-582266 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m21.643379006s)
--- PASS: TestPause/serial/Start (81.64s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.18s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-582266 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-582266 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.163348096s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.18s)

                                                
                                    
x
+
TestPause/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-582266 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-582266 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-582266 --output=json --layout=cluster: exit status 2 (339.945152ms)

                                                
                                                
-- stdout --
	{"Name":"pause-582266","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-582266","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
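The --output=json --layout=cluster payload above reports HTTP-like status codes (418 Paused and 405 Stopped here, 507 InsufficientStorage in the TestInsufficientStorage run earlier). A minimal sketch of decoding it, with struct fields copied from the JSON shown rather than from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
)

// ClusterState mirrors the fields visible in the --layout=cluster JSON above.
type ClusterState struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	raw := `{"Name":"pause-582266","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-582266","StatusCode":200,"StatusName":"OK"}]}`
	var cs ClusterState
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d %s\n", cs.Name, cs.StatusCode, cs.StatusName) // pause-582266: 418 Paused
}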

                                                
                                    
x
+
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-582266 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-582266 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.98s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-582266 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-582266 --alsologtostderr -v=5: (2.982117917s)
--- PASS: TestPause/serial/DeletePaused (2.98s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-582266
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-582266: exit status 1 (18.476645ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-582266: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-023791 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-023791 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (185.556343ms)

                                                
                                                
-- stdout --
	* [false-023791] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 15:49:44.623945 1486581 out.go:360] Setting OutFile to fd 1 ...
	I1213 15:49:44.624074 1486581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:49:44.624090 1486581 out.go:374] Setting ErrFile to fd 2...
	I1213 15:49:44.624095 1486581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 15:49:44.624360 1486581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
	I1213 15:49:44.624771 1486581 out.go:368] Setting JSON to false
	I1213 15:49:44.625673 1486581 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27133,"bootTime":1765613851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1213 15:49:44.625742 1486581 start.go:143] virtualization:  
	I1213 15:49:44.629371 1486581 out.go:179] * [false-023791] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 15:49:44.632393 1486581 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 15:49:44.632466 1486581 notify.go:221] Checking for updates...
	I1213 15:49:44.638542 1486581 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 15:49:44.641495 1486581 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
	I1213 15:49:44.644508 1486581 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
	I1213 15:49:44.647528 1486581 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 15:49:44.650557 1486581 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 15:49:44.653898 1486581 config.go:182] Loaded profile config "kubernetes-upgrade-098313": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 15:49:44.654051 1486581 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 15:49:44.686648 1486581 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 15:49:44.686777 1486581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 15:49:44.742167 1486581 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 15:49:44.733092378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 15:49:44.742264 1486581 docker.go:319] overlay module found
	I1213 15:49:44.745468 1486581 out.go:179] * Using the docker driver based on user configuration
	I1213 15:49:44.748384 1486581 start.go:309] selected driver: docker
	I1213 15:49:44.748398 1486581 start.go:927] validating driver "docker" against <nil>
	I1213 15:49:44.748411 1486581 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 15:49:44.752085 1486581 out.go:203] 
	W1213 15:49:44.755040 1486581 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1213 15:49:44.757887 1486581 out.go:203] 

                                                
                                                
** /stderr **
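Exit status 14 (MK_USAGE) above comes from flag validation: with --container-runtime=containerd, --cni=false is rejected because containerd relies on a CNI plugin for pod networking. A hypothetical sketch of that kind of mutual-exclusion check, not minikube's actual validation code:

package main

import (
	"errors"
	"fmt"
)

// validateCNI is a hypothetical stand-in for the usage check behind the
// MK_USAGE error above: non-Docker runtimes need a CNI, so cni=false is rejected.
func validateCNI(containerRuntime, cni string) error {
	if containerRuntime == "containerd" && cni == "false" {
		return errors.New(`the "containerd" container runtime requires CNI`)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}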
net_test.go:88: 
----------------------- debugLogs start: false-023791 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-023791

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-023791

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-023791

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-023791

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-023791

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-023791

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-023791

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-023791

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-023791

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-023791

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-023791

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-023791" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-023791" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 15:40:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-098313
contexts:
- context:
    cluster: kubernetes-upgrade-098313
    user: kubernetes-upgrade-098313
  name: kubernetes-upgrade-098313
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-098313
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/client.crt
    client-key: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-023791

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: docker system info:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: cri-docker daemon status:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: cri-docker daemon config:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: cri-dockerd version:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: containerd daemon status:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: containerd daemon config:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: /etc/containerd/config.toml:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: containerd config dump:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: crio daemon status:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: crio daemon config:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: /etc/crio:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

>>> host: crio config:
* Profile "false-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-023791"

----------------------- debugLogs end: false-023791 [took: 3.400473186s] --------------------------------
helpers_test.go:176: Cleaning up "false-023791" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-023791
--- PASS: TestNetworkPlugins/group/false (3.73s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (65.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-912710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-912710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m5.402924538s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (65.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-912710 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [ba3f891b-398a-4f94-8dab-c1c91b6788e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [ba3f891b-398a-4f94-8dab-c1c91b6788e3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.00480692s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-912710 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-912710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-912710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.111172161s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-912710 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-912710 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-912710 --alsologtostderr -v=3: (12.030515367s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-912710 -n old-k8s-version-912710
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-912710 -n old-k8s-version-912710: exit status 7 (90.455015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-912710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (52.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-912710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1213 15:55:18.172237 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-912710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (51.625968612s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-912710 -n old-k8s-version-912710
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.05s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-qx5z4" [a34b373a-e149-4874-af7a-c6d99ceb9970] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004545074s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-qx5z4" [a34b373a-e149-4874-af7a-c6d99ceb9970] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003605184s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-912710 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-912710 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-912710 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-912710 -n old-k8s-version-912710
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-912710 -n old-k8s-version-912710: exit status 2 (338.005545ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-912710 -n old-k8s-version-912710
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-912710 -n old-k8s-version-912710: exit status 2 (331.226096ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-912710 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-912710 -n old-k8s-version-912710
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-912710 -n old-k8s-version-912710
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (80.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m20.603633685s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.60s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-270324 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7a558ef4-9302-41ba-be4d-2706472cb4da] Pending
helpers_test.go:353: "busybox" [7a558ef4-9302-41ba-be4d-2706472cb4da] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7a558ef4-9302-41ba-be4d-2706472cb4da] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004499507s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-270324 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-270324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-270324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032741023s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-270324 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-270324 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-270324 --alsologtostderr -v=3: (12.107177884s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-270324 -n embed-certs-270324
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-270324 -n embed-certs-270324: exit status 7 (78.150614ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-270324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (52.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1213 15:58:23.672293 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:58:42.553256 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-270324 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (51.655615308s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-270324 -n embed-certs-270324
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ssqdk" [c77b6d41-e81a-40c1-9b4f-349ab6963a9b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003635382s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ssqdk" [c77b6d41-e81a-40c1-9b4f-349ab6963a9b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003381831s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-270324 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-270324 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-270324 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-270324 -n embed-certs-270324
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-270324 -n embed-certs-270324: exit status 2 (360.662343ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-270324 -n embed-certs-270324
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-270324 -n embed-certs-270324: exit status 2 (360.643004ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-270324 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-270324 -n embed-certs-270324
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-270324 -n embed-certs-270324
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1213 15:59:53.531993 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:59:53.538364 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:59:53.550557 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:59:53.571951 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:59:53.613691 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:59:53.695301 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:59:53.857273 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:59:54.179305 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:59:54.821373 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:59:56.102810 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 15:59:58.665110 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:00:03.786783 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:00:14.028788 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:00:18.171441 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:00:34.510906 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m18.986114712s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.99s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-946932 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [253b8e6c-d161-48a0-810f-480d0a8f0ca1] Pending
helpers_test.go:353: "busybox" [253b8e6c-d161-48a0-810f-480d0a8f0ca1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [253b8e6c-d161-48a0-810f-480d0a8f0ca1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003685565s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-946932 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-946932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-946932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.088595398s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-946932 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-946932 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-946932 --alsologtostderr -v=3: (12.107189117s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-946932 -n default-k8s-diff-port-946932
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-946932 -n default-k8s-diff-port-946932: exit status 7 (69.027764ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-946932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1213 16:01:15.472548 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:01:45.636516 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-946932 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (50.797767361s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-946932 -n default-k8s-diff-port-946932
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-tnvlj" [a63b5d3d-4434-4d5b-a259-6a3fd839d312] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003911707s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-tnvlj" [a63b5d3d-4434-4d5b-a259-6a3fd839d312] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003003402s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-946932 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-946932 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-946932 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-946932 -n default-k8s-diff-port-946932
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-946932 -n default-k8s-diff-port-946932: exit status 2 (359.181428ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-946932 -n default-k8s-diff-port-946932
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-946932 -n default-k8s-diff-port-946932: exit status 2 (344.432255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-946932 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-946932 -n default-k8s-diff-port-946932
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-946932 -n default-k8s-diff-port-946932
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-439544 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-439544 --alsologtostderr -v=3: (1.358479227s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.36s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-439544 -n no-preload-439544: exit status 7 (92.11101ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-439544 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-526531 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-526531 --alsologtostderr -v=3: (1.310359701s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-526531 -n newest-cni-526531
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-526531 -n newest-cni-526531: exit status 7 (68.477512ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-526531 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-526531 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (48.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (48.49882369s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-023791 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-023791 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-n8q5g" [3e2a900a-20c7-4b82-9f43-95cc875e6719] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-n8q5g" [3e2a900a-20c7-4b82-9f43-95cc875e6719] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003388411s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-023791 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (56.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (56.349083558s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-pv7th" [1ed75ca1-91f2-4b57-a21e-b139e88a7f1d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003518073s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-023791 "pgrep -a kubelet"
I1213 16:21:01.912971 1252934 config.go:182] Loaded profile config "flannel-023791": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-023791 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-jskkj" [170e355b-40cf-4453-b994-d177bc5fdb74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-jskkj" [170e355b-40cf-4453-b994-d177bc5fdb74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003607375s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-023791 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (58.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (58.191099991s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-pkpdk" [9ace09af-96f6-4b3d-8353-a6932f5c3616] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-pkpdk" [9ace09af-96f6-4b3d-8353-a6932f5c3616] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003578653s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
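The ControllerPod check waits for pods matching a label selector (here k8s-app=calico-node in kube-system) to reach Running. Sketched directly against client-go instead of the suite's helpers_test.go wrappers; the kubeconfig path, poll interval, and Running-phase check are assumptions for illustration:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitForRunning polls until at least one pod matches the selector and every
// matching pod reports phase Running.
func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
        func(ctx context.Context) (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return false, err
            }
            if len(pods.Items) == 0 {
                return false, nil
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    return false, nil
                }
            }
            return true, nil
        })
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    if err := waitForRunning(cs, "kube-system", "k8s-app=calico-node", 10*time.Minute); err != nil {
        panic(err)
    }
    fmt.Println("calico-node is Running")
}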

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-023791 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-023791 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-bd8t2" [0d5c6e8d-1ff1-4d3d-bb77-2928053dc133] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-bd8t2" [0d5c6e8d-1ff1-4d3d-bb77-2928053dc133] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003739046s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-023791 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (59.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (59.912947624s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.91s)
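Unlike the calico and flannel groups, this start passes --cni a path instead of a plugin name, so minikube applies testdata/kube-flannel.yaml as the CNI manifest. Wrapping the command from the run above in Go (flags and the profile name are taken from the log; the stream wiring is illustrative):

package main

import (
    "log"
    "os"
    "os/exec"
)

func main() {
    // Start a cluster and let minikube apply a custom CNI manifest from disk.
    cmd := exec.Command("out/minikube-linux-arm64", "start",
        "-p", "custom-flannel-023791",
        "--memory=3072",
        "--wait=true", "--wait-timeout=15m",
        "--cni=testdata/kube-flannel.yaml",
        "--driver=docker", "--container-runtime=containerd")
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    if err := cmd.Run(); err != nil {
        log.Fatalf("minikube start failed: %v", err)
    }
}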

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-023791 "pgrep -a kubelet"
I1213 16:24:10.239423 1252934 config.go:182] Loaded profile config "custom-flannel-023791": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)
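The KubeletFlags cases ssh into the node and dump the live kubelet command line with pgrep -a so the suite can inspect the flags it was started with. A standalone sketch of the same check (the containerd-substring assertion is an assumption, not the test's actual expectation):

package main

import (
    "fmt"
    "log"
    "os/exec"
    "strings"
)

func main() {
    profile := "custom-flannel-023791"
    out, err := exec.Command("out/minikube-linux-arm64", "ssh", "-p", profile,
        "pgrep -a kubelet").CombinedOutput()
    if err != nil {
        log.Fatalf("ssh/pgrep failed: %v\n%s", err, out)
    }
    // Assumption: with containerd as the runtime, the kubelet command line
    // references the containerd CRI socket.
    if !strings.Contains(string(out), "containerd") {
        log.Fatalf("kubelet command line does not mention containerd:\n%s", out)
    }
    fmt.Printf("kubelet flags:\n%s", out)
}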

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-023791 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-h6667" [2bf0c98d-0593-47c0-b64e-f19ef4fcb793] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-h6667" [2bf0c98d-0593-47c0-b64e-f19ef4fcb793] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.018558192s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (89.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m29.329144279s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-023791 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (83.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1213 16:24:49.050523 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:24:53.531539 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/old-k8s-version-912710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:01.254460 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:09.532193 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:18.171866 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:38.304573 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/default-k8s-diff-port-946932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m23.20087859s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-78bmh" [e9a03da2-647b-449c-a319-96e0e30d81f0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003925302s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-023791 "pgrep -a kubelet"
E1213 16:25:50.493639 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/auto-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1213 16:25:50.725744 1252934 config.go:182] Loaded profile config "kindnet-023791": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-023791 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-c9clj" [38b1fe58-7a3c-4676-85c3-09bd2dcc3824] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 16:25:52.217770 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:52.224300 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:52.235740 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:52.257303 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:52.298809 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:52.380217 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:52.541884 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:52.863422 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:53.505564 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:54.786882 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-c9clj" [38b1fe58-7a3c-4676-85c3-09bd2dcc3824] Running
E1213 16:25:55.624218 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/flannel-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:55.630616 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/flannel-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:55.642047 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/flannel-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:55.663643 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/flannel-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:55.705064 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/flannel-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:55.786584 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/flannel-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:55.948156 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/flannel-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:56.269538 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/flannel-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:56.911928 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/flannel-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:57.348847 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:25:58.194230 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/flannel-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.020382247s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-023791 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-023791 "pgrep -a kubelet"
I1213 16:26:10.129223 1252934 config.go:182] Loaded profile config "bridge-023791": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-023791 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ffqg2" [6782a881-1524-4dd3-bc8e-4e955a7d9d2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 16:26:12.712572 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-ffqg2" [6782a881-1524-4dd3-bc8e-4e955a7d9d2e] Running
E1213 16:26:16.119931 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/flannel-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004912734s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-023791 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (80.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1213 16:26:33.194298 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/no-preload-439544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 16:26:36.606631 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/flannel-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-023791 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m20.680417155s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-023791 "pgrep -a kubelet"
I1213 16:27:44.189835 1252934 config.go:182] Loaded profile config "enable-default-cni-023791": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-023791 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ktl8d" [32fc32fd-3f1e-4e24-be88-11217b5f156b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ktl8d" [32fc32fd-3f1e-4e24-be88-11217b5f156b] Running
E1213 16:27:51.958000 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/calico-023791/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003709812s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-023791 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-023791 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    

Test skip (38/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.45
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.16
392 TestNetworkPlugins/group/kubenet 3.55
400 TestNetworkPlugins/group/cilium 3.93
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.45s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-115921 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-115921" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-115921
--- SKIP: TestDownloadOnlyKic (0.45s)
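Even though the test skips on arm64, the profile it created still gets deleted by the cleanup helper. A sketch of that create-then-always-clean-up shape as a plain Go test (names are placeholders; this is not the suite's code):

package sketch

import (
    "os/exec"
    "runtime"
    "testing"
)

func TestDownloadOnlyKicSketch(t *testing.T) {
    const profile = "download-docker-115921"
    // Delete the profile no matter how the test exits: pass, fail, or skip.
    defer exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).Run()

    if err := exec.Command("out/minikube-linux-arm64", "start", "--download-only",
        "-p", profile, "--driver=docker", "--container-runtime=containerd").Run(); err != nil {
        t.Fatalf("download-only start failed: %v", err)
    }

    if runtime.GOARCH == "arm64" {
        t.Skip("skip for arm64 platform; see kubernetes/minikube#10144")
    }
    // ... assertions on the cached kic base image would follow ...
}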

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
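Most SKIP entries in this table reduce to a few environment guards at the top of each test: the host architecture (mysql does not support arm64), the container runtime under test (the docker-env and podman-env checks only run against the docker runtime), and the host OS (the HyperKit driver tests only run on darwin). A condensed sketch of those guards as ordinary Go tests (the test names and the containerRuntime variable are placeholders for how the suite actually wires its flags):

package sketch

import (
    "runtime"
    "testing"
)

// containerRuntime stands in for the value the suite reads from its
// --container-runtime flag; this run used containerd.
var containerRuntime = "containerd"

func TestMySQLSketch(t *testing.T) {
    if runtime.GOARCH == "arm64" {
        t.Skip("arm64 is not supported by mysql; see kubernetes/minikube#10144")
    }
    // ... deploy MySQL and run queries against it ...
}

func TestDockerEnvSketch(t *testing.T) {
    if containerRuntime != "docker" {
        t.Skipf("only validate docker env with docker container runtime, currently testing %s", containerRuntime)
    }
    // ... eval `minikube docker-env` and talk to the daemon ...
}

func TestHyperKitDriverSketch(t *testing.T) {
    if runtime.GOOS != "darwin" {
        t.Skip("HyperKit driver tests only run on darwin")
    }
    // ... install or upgrade the HyperKit driver ...
}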

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-614298" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-614298
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-023791 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-023791

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-023791

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-023791

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-023791

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-023791

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-023791

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-023791

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-023791

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-023791

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-023791

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-023791

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-023791" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-023791" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 15:40:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-098313
contexts:
- context:
    cluster: kubernetes-upgrade-098313
    user: kubernetes-upgrade-098313
  name: kubernetes-upgrade-098313
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-098313
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/client.crt
    client-key: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/client.key
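
Note: the kubectl config dumped above accounts for the repeated "context was not found" and "does not exist" errors in this section: the kubeconfig defines only the kubernetes-upgrade-098313 cluster/context and current-context is empty, so any kubectl call scoped to kubenet-023791 has nothing to resolve. A minimal sketch of inspecting and selecting a context on a host with this kubeconfig (illustrative commands, not part of the recorded test run):

    # list the contexts present in the kubeconfig; only kubernetes-upgrade-098313 exists here
    kubectl config get-contexts
    # target an existing context explicitly rather than relying on the empty current-context
    kubectl --context kubernetes-upgrade-098313 get nodes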

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-023791

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-023791"

                                                
                                                
----------------------- debugLogs end: kubenet-023791 [took: 3.39115775s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-023791" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-023791
--- SKIP: TestNetworkPlugins/group/kubenet (3.55s)
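
Note: the output above is the debug-log collection for a skipped group; because the kubenet-023791 profile was never started, every host probe reports a missing profile and every kubectl probe reports a missing context. A hedged sketch of re-running just this group from a minikube checkout (the CI job also passes driver/runtime and binary flags that are not reproduced in this report, so treat the invocation as illustrative):

    # run only the kubenet network-plugin group of the integration suite
    go test ./test/integration -run "TestNetworkPlugins/group/kubenet" -timeout 60m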

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-023791 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-023791" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 15:40:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-098313
contexts:
- context:
    cluster: kubernetes-upgrade-098313
    user: kubernetes-upgrade-098313
  name: kubernetes-upgrade-098313
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-098313
  user:
    client-certificate: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/client.crt
    client-key: /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/kubernetes-upgrade-098313/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-023791

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-023791" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-023791"

                                                
                                                
----------------------- debugLogs end: cilium-023791 [took: 3.782300742s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-023791" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-023791
--- SKIP: TestNetworkPlugins/group/cilium (3.93s)
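
Note: as shown above, helpers_test cleans up each skipped group by deleting its profile. If leftover integration-test profiles accumulate on a build agent, they can be removed manually; a minimal sketch, assuming minikube is on PATH (illustrative, not part of the recorded run):

    # delete a single leftover profile
    minikube delete -p cilium-023791
    # or remove every profile known to this minikube home
    minikube delete --all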

                                                
                                    